00:00:00.001 Started by upstream project "autotest-per-patch" build number 132842 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.136 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.137 The recommended git tool is: git 00:00:00.137 using credential 00000000-0000-0000-0000-000000000002 00:00:00.139 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.163 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.194 Using shallow fetch with depth 1 00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.194 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.239 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.239 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.097 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.108 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.121 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.121 > git config core.sparsecheckout # timeout=10 00:00:06.132 > git read-tree -mu HEAD # timeout=10 00:00:06.148 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.167 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.167 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.261 [Pipeline] Start of Pipeline 00:00:06.275 [Pipeline] library 00:00:06.276 Loading library shm_lib@master 00:00:06.277 Library shm_lib@master is cached. Copying from home. 00:00:06.292 [Pipeline] node 00:00:06.305 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.307 [Pipeline] { 00:00:06.316 [Pipeline] catchError 00:00:06.318 [Pipeline] { 00:00:06.331 [Pipeline] wrap 00:00:06.340 [Pipeline] { 00:00:06.349 [Pipeline] stage 00:00:06.350 [Pipeline] { (Prologue) 00:00:06.368 [Pipeline] echo 00:00:06.370 Node: VM-host-SM9 00:00:06.377 [Pipeline] cleanWs 00:00:06.386 [WS-CLEANUP] Deleting project workspace... 00:00:06.386 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.393 [WS-CLEANUP] done 00:00:06.655 [Pipeline] setCustomBuildProperty 00:00:06.843 [Pipeline] httpRequest 00:00:07.239 [Pipeline] echo 00:00:07.241 Sorcerer 10.211.164.20 is alive 00:00:07.248 [Pipeline] retry 00:00:07.250 [Pipeline] { 00:00:07.263 [Pipeline] httpRequest 00:00:07.267 HttpMethod: GET 00:00:07.268 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.268 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.270 Response Code: HTTP/1.1 200 OK 00:00:07.270 Success: Status code 200 is in the accepted range: 200,404 00:00:07.271 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.215 [Pipeline] } 00:00:08.231 [Pipeline] // retry 00:00:08.238 [Pipeline] sh 00:00:08.522 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.537 [Pipeline] httpRequest 00:00:09.285 [Pipeline] echo 00:00:09.287 Sorcerer 10.211.164.20 is alive 00:00:09.297 [Pipeline] retry 00:00:09.299 [Pipeline] { 00:00:09.313 [Pipeline] httpRequest 00:00:09.318 HttpMethod: GET 00:00:09.319 URL: http://10.211.164.20/packages/spdk_97b0ef63e5ae781f03290bb390dec176a83aaa41.tar.gz 00:00:09.319 Sending request to url: http://10.211.164.20/packages/spdk_97b0ef63e5ae781f03290bb390dec176a83aaa41.tar.gz 00:00:09.341 Response Code: HTTP/1.1 200 OK 00:00:09.341 Success: Status code 200 is in the accepted range: 200,404 00:00:09.342 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_97b0ef63e5ae781f03290bb390dec176a83aaa41.tar.gz 00:01:21.627 [Pipeline] } 00:01:21.644 [Pipeline] // retry 00:01:21.652 [Pipeline] sh 00:01:21.933 + tar --no-same-owner -xf spdk_97b0ef63e5ae781f03290bb390dec176a83aaa41.tar.gz 00:01:24.479 [Pipeline] sh 00:01:24.760 + git -C spdk log --oneline -n5 00:01:24.760 97b0ef63e nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:24.760 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:24.760 66289a6db build: use VERSION file for storing version 00:01:24.760 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:24.760 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:24.778 [Pipeline] writeFile 00:01:24.792 [Pipeline] sh 00:01:25.073 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.085 [Pipeline] sh 00:01:25.365 + cat autorun-spdk.conf 00:01:25.365 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.365 SPDK_TEST_NVMF=1 00:01:25.365 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.365 SPDK_TEST_URING=1 00:01:25.365 SPDK_TEST_USDT=1 00:01:25.365 SPDK_RUN_UBSAN=1 00:01:25.365 NET_TYPE=virt 00:01:25.365 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.373 RUN_NIGHTLY=0 00:01:25.375 [Pipeline] } 00:01:25.388 [Pipeline] // stage 00:01:25.402 [Pipeline] stage 00:01:25.404 [Pipeline] { (Run VM) 00:01:25.416 [Pipeline] sh 00:01:25.697 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:25.697 + echo 'Start stage prepare_nvme.sh' 00:01:25.697 Start stage prepare_nvme.sh 00:01:25.697 + [[ -n 0 ]] 00:01:25.697 + disk_prefix=ex0 00:01:25.697 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:25.697 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:25.697 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:25.697 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.697 ++ SPDK_TEST_NVMF=1 00:01:25.697 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.697 
++ SPDK_TEST_URING=1 00:01:25.697 ++ SPDK_TEST_USDT=1 00:01:25.697 ++ SPDK_RUN_UBSAN=1 00:01:25.697 ++ NET_TYPE=virt 00:01:25.697 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.697 ++ RUN_NIGHTLY=0 00:01:25.697 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:25.697 + nvme_files=() 00:01:25.697 + declare -A nvme_files 00:01:25.697 + backend_dir=/var/lib/libvirt/images/backends 00:01:25.697 + nvme_files['nvme.img']=5G 00:01:25.697 + nvme_files['nvme-cmb.img']=5G 00:01:25.697 + nvme_files['nvme-multi0.img']=4G 00:01:25.697 + nvme_files['nvme-multi1.img']=4G 00:01:25.697 + nvme_files['nvme-multi2.img']=4G 00:01:25.697 + nvme_files['nvme-openstack.img']=8G 00:01:25.697 + nvme_files['nvme-zns.img']=5G 00:01:25.697 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:25.697 + (( SPDK_TEST_FTL == 1 )) 00:01:25.697 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:25.697 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:25.697 + for nvme in "${!nvme_files[@]}" 00:01:25.697 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:25.697 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.697 + for nvme in "${!nvme_files[@]}" 00:01:25.697 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:25.697 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.697 + for nvme in "${!nvme_files[@]}" 00:01:25.697 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:25.956 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:25.956 + for nvme in "${!nvme_files[@]}" 00:01:25.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:25.956 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.956 + for nvme in "${!nvme_files[@]}" 00:01:25.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:25.956 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.956 + for nvme in "${!nvme_files[@]}" 00:01:25.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:25.956 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.216 + for nvme in "${!nvme_files[@]}" 00:01:26.216 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:26.216 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.216 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:26.216 + echo 'End stage prepare_nvme.sh' 00:01:26.216 End stage prepare_nvme.sh 00:01:26.227 [Pipeline] sh 00:01:26.507 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:26.507 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b 
/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:26.767 00:01:26.767 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:26.767 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:26.767 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:26.767 HELP=0 00:01:26.767 DRY_RUN=0 00:01:26.767 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:26.767 NVME_DISKS_TYPE=nvme,nvme, 00:01:26.767 NVME_AUTO_CREATE=0 00:01:26.767 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:26.767 NVME_CMB=,, 00:01:26.767 NVME_PMR=,, 00:01:26.767 NVME_ZNS=,, 00:01:26.767 NVME_MS=,, 00:01:26.767 NVME_FDP=,, 00:01:26.767 SPDK_VAGRANT_DISTRO=fedora39 00:01:26.767 SPDK_VAGRANT_VMCPU=10 00:01:26.767 SPDK_VAGRANT_VMRAM=12288 00:01:26.767 SPDK_VAGRANT_PROVIDER=libvirt 00:01:26.767 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:26.767 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:26.767 SPDK_OPENSTACK_NETWORK=0 00:01:26.767 VAGRANT_PACKAGE_BOX=0 00:01:26.767 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:26.767 FORCE_DISTRO=true 00:01:26.767 VAGRANT_BOX_VERSION= 00:01:26.767 EXTRA_VAGRANTFILES= 00:01:26.767 NIC_MODEL=e1000 00:01:26.767 00:01:26.767 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:26.767 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:29.301 Bringing machine 'default' up with 'libvirt' provider... 00:01:30.235 ==> default: Creating image (snapshot of base box volume). 00:01:30.235 ==> default: Creating domain with the following settings... 
00:01:30.235 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733906137_376d61effcdec97fcb15 00:01:30.235 ==> default: -- Domain type: kvm 00:01:30.235 ==> default: -- Cpus: 10 00:01:30.235 ==> default: -- Feature: acpi 00:01:30.236 ==> default: -- Feature: apic 00:01:30.236 ==> default: -- Feature: pae 00:01:30.236 ==> default: -- Memory: 12288M 00:01:30.236 ==> default: -- Memory Backing: hugepages: 00:01:30.236 ==> default: -- Management MAC: 00:01:30.236 ==> default: -- Loader: 00:01:30.236 ==> default: -- Nvram: 00:01:30.236 ==> default: -- Base box: spdk/fedora39 00:01:30.236 ==> default: -- Storage pool: default 00:01:30.236 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733906137_376d61effcdec97fcb15.img (20G) 00:01:30.236 ==> default: -- Volume Cache: default 00:01:30.236 ==> default: -- Kernel: 00:01:30.236 ==> default: -- Initrd: 00:01:30.236 ==> default: -- Graphics Type: vnc 00:01:30.236 ==> default: -- Graphics Port: -1 00:01:30.236 ==> default: -- Graphics IP: 127.0.0.1 00:01:30.236 ==> default: -- Graphics Password: Not defined 00:01:30.236 ==> default: -- Video Type: cirrus 00:01:30.236 ==> default: -- Video VRAM: 9216 00:01:30.236 ==> default: -- Sound Type: 00:01:30.236 ==> default: -- Keymap: en-us 00:01:30.236 ==> default: -- TPM Path: 00:01:30.236 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:30.236 ==> default: -- Command line args: 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:30.236 ==> default: -> value=-drive, 00:01:30.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:30.236 ==> default: -> value=-drive, 00:01:30.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.236 ==> default: -> value=-drive, 00:01:30.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.236 ==> default: -> value=-drive, 00:01:30.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:30.236 ==> default: -> value=-device, 00:01:30.236 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.236 ==> default: Creating shared folders metadata... 00:01:30.236 ==> default: Starting domain. 00:01:31.616 ==> default: Waiting for domain to get an IP address... 00:01:49.814 ==> default: Waiting for SSH to become available... 00:01:49.814 ==> default: Configuring and enabling network interfaces... 
00:01:54.008 default: SSH address: 192.168.121.101:22 00:01:54.008 default: SSH username: vagrant 00:01:54.008 default: SSH auth method: private key 00:01:55.915 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.035 ==> default: Mounting SSHFS shared folder... 00:02:05.414 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:05.414 ==> default: Checking Mount.. 00:02:06.792 ==> default: Folder Successfully Mounted! 00:02:06.792 ==> default: Running provisioner: file... 00:02:07.728 default: ~/.gitconfig => .gitconfig 00:02:07.988 00:02:07.988 SUCCESS! 00:02:07.988 00:02:07.988 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:07.988 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:07.988 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:07.988 00:02:07.997 [Pipeline] } 00:02:08.012 [Pipeline] // stage 00:02:08.021 [Pipeline] dir 00:02:08.021 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:08.023 [Pipeline] { 00:02:08.035 [Pipeline] catchError 00:02:08.037 [Pipeline] { 00:02:08.049 [Pipeline] sh 00:02:08.330 + vagrant ssh-config --host vagrant 00:02:08.330 + sed -ne /^Host/,$p 00:02:08.330 + tee ssh_conf 00:02:11.616 Host vagrant 00:02:11.616 HostName 192.168.121.101 00:02:11.616 User vagrant 00:02:11.616 Port 22 00:02:11.616 UserKnownHostsFile /dev/null 00:02:11.616 StrictHostKeyChecking no 00:02:11.616 PasswordAuthentication no 00:02:11.616 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:11.616 IdentitiesOnly yes 00:02:11.616 LogLevel FATAL 00:02:11.616 ForwardAgent yes 00:02:11.616 ForwardX11 yes 00:02:11.616 00:02:11.630 [Pipeline] withEnv 00:02:11.632 [Pipeline] { 00:02:11.646 [Pipeline] sh 00:02:11.927 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:11.927 source /etc/os-release 00:02:11.927 [[ -e /image.version ]] && img=$(< /image.version) 00:02:11.927 # Minimal, systemd-like check. 00:02:11.927 if [[ -e /.dockerenv ]]; then 00:02:11.927 # Clear garbage from the node's name: 00:02:11.927 # agt-er_autotest_547-896 -> autotest_547-896 00:02:11.927 # $HOSTNAME is the actual container id 00:02:11.927 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:11.927 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:11.927 # We can assume this is a mount from a host where container is running, 00:02:11.927 # so fetch its hostname to easily identify the target swarm worker. 
00:02:11.927 container="$(< /etc/hostname) ($agent)" 00:02:11.927 else 00:02:11.927 # Fallback 00:02:11.927 container=$agent 00:02:11.927 fi 00:02:11.927 fi 00:02:11.927 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:11.927 00:02:12.198 [Pipeline] } 00:02:12.214 [Pipeline] // withEnv 00:02:12.222 [Pipeline] setCustomBuildProperty 00:02:12.237 [Pipeline] stage 00:02:12.239 [Pipeline] { (Tests) 00:02:12.257 [Pipeline] sh 00:02:12.538 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:12.810 [Pipeline] sh 00:02:13.252 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:13.268 [Pipeline] timeout 00:02:13.269 Timeout set to expire in 1 hr 0 min 00:02:13.271 [Pipeline] { 00:02:13.287 [Pipeline] sh 00:02:13.569 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:14.137 HEAD is now at 97b0ef63e nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:02:14.149 [Pipeline] sh 00:02:14.430 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:14.702 [Pipeline] sh 00:02:14.983 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:15.258 [Pipeline] sh 00:02:15.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:15.799 ++ readlink -f spdk_repo 00:02:15.799 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:15.799 + [[ -n /home/vagrant/spdk_repo ]] 00:02:15.799 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:15.799 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:15.799 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:15.799 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:15.799 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:15.799 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:15.799 + cd /home/vagrant/spdk_repo 00:02:15.799 + source /etc/os-release 00:02:15.799 ++ NAME='Fedora Linux' 00:02:15.799 ++ VERSION='39 (Cloud Edition)' 00:02:15.799 ++ ID=fedora 00:02:15.799 ++ VERSION_ID=39 00:02:15.799 ++ VERSION_CODENAME= 00:02:15.799 ++ PLATFORM_ID=platform:f39 00:02:15.799 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:15.799 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:15.799 ++ LOGO=fedora-logo-icon 00:02:15.799 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:15.799 ++ HOME_URL=https://fedoraproject.org/ 00:02:15.799 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:15.799 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:15.799 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:15.799 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:15.799 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:15.799 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:15.799 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:15.799 ++ SUPPORT_END=2024-11-12 00:02:15.799 ++ VARIANT='Cloud Edition' 00:02:15.799 ++ VARIANT_ID=cloud 00:02:15.799 + uname -a 00:02:15.799 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:15.799 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:16.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:16.317 Hugepages 00:02:16.317 node hugesize free / total 00:02:16.317 node0 1048576kB 0 / 0 00:02:16.317 node0 2048kB 0 / 0 00:02:16.317 00:02:16.317 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:16.317 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:16.317 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:16.317 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:16.317 + rm -f /tmp/spdk-ld-path 00:02:16.317 + source autorun-spdk.conf 00:02:16.317 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.317 ++ SPDK_TEST_NVMF=1 00:02:16.317 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:16.317 ++ SPDK_TEST_URING=1 00:02:16.317 ++ SPDK_TEST_USDT=1 00:02:16.317 ++ SPDK_RUN_UBSAN=1 00:02:16.317 ++ NET_TYPE=virt 00:02:16.317 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:16.317 ++ RUN_NIGHTLY=0 00:02:16.317 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:16.317 + [[ -n '' ]] 00:02:16.317 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:16.317 + for M in /var/spdk/build-*-manifest.txt 00:02:16.317 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:16.317 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:16.317 + for M in /var/spdk/build-*-manifest.txt 00:02:16.317 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:16.317 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:16.317 + for M in /var/spdk/build-*-manifest.txt 00:02:16.317 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:16.317 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:16.317 ++ uname 00:02:16.317 + [[ Linux == \L\i\n\u\x ]] 00:02:16.317 + sudo dmesg -T 00:02:16.317 + sudo dmesg --clear 00:02:16.317 + dmesg_pid=5259 00:02:16.317 + sudo dmesg -Tw 00:02:16.317 + [[ Fedora Linux == FreeBSD ]] 00:02:16.317 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.317 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.317 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:16.317 + [[ -x /usr/src/fio-static/fio ]] 00:02:16.317 + export FIO_BIN=/usr/src/fio-static/fio 00:02:16.317 + FIO_BIN=/usr/src/fio-static/fio 00:02:16.317 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:16.317 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:16.317 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:16.317 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.317 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.317 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:16.317 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.317 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.317 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.577 08:36:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:16.577 08:36:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:16.577 08:36:24 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:16.577 08:36:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:16.577 08:36:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:16.577 08:36:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:16.577 08:36:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:16.577 08:36:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:16.577 08:36:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:16.577 08:36:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.577 08:36:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.577 08:36:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.577 08:36:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.577 08:36:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.577 08:36:24 -- paths/export.sh@5 -- $ export PATH 00:02:16.577 08:36:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.577 08:36:24 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:16.577 08:36:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:16.577 08:36:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733906184.XXXXXX 00:02:16.577 08:36:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733906184.aWzDnQ 00:02:16.577 08:36:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:16.577 08:36:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:16.577 08:36:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:16.577 08:36:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:16.577 08:36:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:16.577 08:36:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:16.577 08:36:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:16.577 08:36:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.577 08:36:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:16.577 08:36:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:16.577 08:36:24 -- pm/common@17 -- $ local monitor 00:02:16.577 08:36:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.577 08:36:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.577 08:36:24 -- pm/common@21 -- $ date +%s 00:02:16.577 08:36:24 -- pm/common@25 -- $ sleep 1 00:02:16.577 08:36:24 -- pm/common@21 -- $ date +%s 00:02:16.577 08:36:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733906184 00:02:16.577 08:36:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733906184 00:02:16.577 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733906184_collect-cpu-load.pm.log 00:02:16.577 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733906184_collect-vmstat.pm.log 00:02:17.514 08:36:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:17.514 08:36:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:17.514 08:36:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:17.514 08:36:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:17.514 08:36:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:17.514 Wed Dec 11 08:36:25 AM UTC 2024 00:02:17.514 08:36:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:17.514 v25.01-pre-332-g97b0ef63e 00:02:17.514 08:36:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:17.514 08:36:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:17.514 08:36:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:17.514 08:36:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:17.514 08:36:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:17.514 08:36:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.514 ************************************ 00:02:17.514 START TEST ubsan 00:02:17.514 ************************************ 00:02:17.514 using ubsan 00:02:17.514 08:36:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:17.514 00:02:17.514 real 0m0.000s 00:02:17.514 user 0m0.000s 00:02:17.514 sys 0m0.000s 00:02:17.514 08:36:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:17.514 08:36:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:17.514 ************************************ 00:02:17.514 END TEST ubsan 00:02:17.514 ************************************ 00:02:17.772 08:36:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:17.772 08:36:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:17.772 08:36:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:17.772 08:36:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:17.772 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:17.772 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:18.337 Using 'verbs' RDMA provider 00:02:34.153 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:46.368 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:46.368 Creating mk/config.mk...done. 00:02:46.368 Creating mk/cc.flags.mk...done. 00:02:46.368 Type 'make' to build. 
00:02:46.368 08:36:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:46.368 08:36:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:46.368 08:36:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.368 08:36:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.368 ************************************ 00:02:46.368 START TEST make 00:02:46.368 ************************************ 00:02:46.368 08:36:54 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:58.599 The Meson build system 00:02:58.599 Version: 1.5.0 00:02:58.599 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:58.599 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:58.599 Build type: native build 00:02:58.600 Program cat found: YES (/usr/bin/cat) 00:02:58.600 Project name: DPDK 00:02:58.600 Project version: 24.03.0 00:02:58.600 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:58.600 C linker for the host machine: cc ld.bfd 2.40-14 00:02:58.600 Host machine cpu family: x86_64 00:02:58.600 Host machine cpu: x86_64 00:02:58.600 Message: ## Building in Developer Mode ## 00:02:58.600 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:58.600 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:58.600 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:58.600 Program python3 found: YES (/usr/bin/python3) 00:02:58.600 Program cat found: YES (/usr/bin/cat) 00:02:58.600 Compiler for C supports arguments -march=native: YES 00:02:58.600 Checking for size of "void *" : 8 00:02:58.600 Checking for size of "void *" : 8 (cached) 00:02:58.600 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:58.600 Library m found: YES 00:02:58.600 Library numa found: YES 00:02:58.600 Has header "numaif.h" : YES 00:02:58.600 Library fdt found: NO 00:02:58.600 Library execinfo found: NO 00:02:58.600 Has header "execinfo.h" : YES 00:02:58.600 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:58.600 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:58.600 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:58.600 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:58.600 Run-time dependency openssl found: YES 3.1.1 00:02:58.600 Run-time dependency libpcap found: YES 1.10.4 00:02:58.600 Has header "pcap.h" with dependency libpcap: YES 00:02:58.600 Compiler for C supports arguments -Wcast-qual: YES 00:02:58.600 Compiler for C supports arguments -Wdeprecated: YES 00:02:58.600 Compiler for C supports arguments -Wformat: YES 00:02:58.600 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:58.600 Compiler for C supports arguments -Wformat-security: NO 00:02:58.600 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:58.600 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:58.600 Compiler for C supports arguments -Wnested-externs: YES 00:02:58.600 Compiler for C supports arguments -Wold-style-definition: YES 00:02:58.600 Compiler for C supports arguments -Wpointer-arith: YES 00:02:58.600 Compiler for C supports arguments -Wsign-compare: YES 00:02:58.600 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:58.600 Compiler for C supports arguments -Wundef: YES 00:02:58.600 Compiler for C supports arguments -Wwrite-strings: YES 00:02:58.600 Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:58.600 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:58.600 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:58.600 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:58.600 Program objdump found: YES (/usr/bin/objdump) 00:02:58.600 Compiler for C supports arguments -mavx512f: YES 00:02:58.600 Checking if "AVX512 checking" compiles: YES 00:02:58.600 Fetching value of define "__SSE4_2__" : 1 00:02:58.600 Fetching value of define "__AES__" : 1 00:02:58.600 Fetching value of define "__AVX__" : 1 00:02:58.600 Fetching value of define "__AVX2__" : 1 00:02:58.600 Fetching value of define "__AVX512BW__" : (undefined) 00:02:58.600 Fetching value of define "__AVX512CD__" : (undefined) 00:02:58.600 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:58.600 Fetching value of define "__AVX512F__" : (undefined) 00:02:58.600 Fetching value of define "__AVX512VL__" : (undefined) 00:02:58.600 Fetching value of define "__PCLMUL__" : 1 00:02:58.600 Fetching value of define "__RDRND__" : 1 00:02:58.600 Fetching value of define "__RDSEED__" : 1 00:02:58.600 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:58.600 Fetching value of define "__znver1__" : (undefined) 00:02:58.600 Fetching value of define "__znver2__" : (undefined) 00:02:58.600 Fetching value of define "__znver3__" : (undefined) 00:02:58.600 Fetching value of define "__znver4__" : (undefined) 00:02:58.600 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:58.600 Message: lib/log: Defining dependency "log" 00:02:58.600 Message: lib/kvargs: Defining dependency "kvargs" 00:02:58.600 Message: lib/telemetry: Defining dependency "telemetry" 00:02:58.600 Checking for function "getentropy" : NO 00:02:58.600 Message: lib/eal: Defining dependency "eal" 00:02:58.600 Message: lib/ring: Defining dependency "ring" 00:02:58.600 Message: lib/rcu: Defining dependency "rcu" 00:02:58.600 Message: lib/mempool: Defining dependency "mempool" 00:02:58.600 Message: lib/mbuf: Defining dependency "mbuf" 00:02:58.600 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:58.600 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:58.600 Compiler for C supports arguments -mpclmul: YES 00:02:58.600 Compiler for C supports arguments -maes: YES 00:02:58.600 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:58.600 Compiler for C supports arguments -mavx512bw: YES 00:02:58.600 Compiler for C supports arguments -mavx512dq: YES 00:02:58.600 Compiler for C supports arguments -mavx512vl: YES 00:02:58.600 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:58.600 Compiler for C supports arguments -mavx2: YES 00:02:58.600 Compiler for C supports arguments -mavx: YES 00:02:58.600 Message: lib/net: Defining dependency "net" 00:02:58.600 Message: lib/meter: Defining dependency "meter" 00:02:58.600 Message: lib/ethdev: Defining dependency "ethdev" 00:02:58.600 Message: lib/pci: Defining dependency "pci" 00:02:58.600 Message: lib/cmdline: Defining dependency "cmdline" 00:02:58.600 Message: lib/hash: Defining dependency "hash" 00:02:58.600 Message: lib/timer: Defining dependency "timer" 00:02:58.600 Message: lib/compressdev: Defining dependency "compressdev" 00:02:58.600 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:58.600 Message: lib/dmadev: Defining dependency "dmadev" 00:02:58.600 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:58.600 Message: lib/power: Defining dependency "power" 00:02:58.600 Message: 
lib/reorder: Defining dependency "reorder" 00:02:58.600 Message: lib/security: Defining dependency "security" 00:02:58.600 Has header "linux/userfaultfd.h" : YES 00:02:58.600 Has header "linux/vduse.h" : YES 00:02:58.600 Message: lib/vhost: Defining dependency "vhost" 00:02:58.600 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:58.600 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:58.600 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:58.600 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:58.600 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:58.600 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:58.600 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:58.600 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:58.600 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:58.600 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:58.600 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:58.600 Configuring doxy-api-html.conf using configuration 00:02:58.600 Configuring doxy-api-man.conf using configuration 00:02:58.600 Program mandb found: YES (/usr/bin/mandb) 00:02:58.600 Program sphinx-build found: NO 00:02:58.600 Configuring rte_build_config.h using configuration 00:02:58.600 Message: 00:02:58.600 ================= 00:02:58.600 Applications Enabled 00:02:58.600 ================= 00:02:58.600 00:02:58.600 apps: 00:02:58.600 00:02:58.600 00:02:58.600 Message: 00:02:58.600 ================= 00:02:58.600 Libraries Enabled 00:02:58.600 ================= 00:02:58.600 00:02:58.600 libs: 00:02:58.600 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:58.600 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:58.600 cryptodev, dmadev, power, reorder, security, vhost, 00:02:58.600 00:02:58.600 Message: 00:02:58.600 =============== 00:02:58.600 Drivers Enabled 00:02:58.600 =============== 00:02:58.600 00:02:58.600 common: 00:02:58.600 00:02:58.600 bus: 00:02:58.600 pci, vdev, 00:02:58.600 mempool: 00:02:58.600 ring, 00:02:58.600 dma: 00:02:58.600 00:02:58.600 net: 00:02:58.600 00:02:58.600 crypto: 00:02:58.600 00:02:58.600 compress: 00:02:58.600 00:02:58.600 vdpa: 00:02:58.600 00:02:58.600 00:02:58.600 Message: 00:02:58.600 ================= 00:02:58.600 Content Skipped 00:02:58.600 ================= 00:02:58.600 00:02:58.600 apps: 00:02:58.600 dumpcap: explicitly disabled via build config 00:02:58.600 graph: explicitly disabled via build config 00:02:58.600 pdump: explicitly disabled via build config 00:02:58.600 proc-info: explicitly disabled via build config 00:02:58.600 test-acl: explicitly disabled via build config 00:02:58.600 test-bbdev: explicitly disabled via build config 00:02:58.600 test-cmdline: explicitly disabled via build config 00:02:58.600 test-compress-perf: explicitly disabled via build config 00:02:58.600 test-crypto-perf: explicitly disabled via build config 00:02:58.600 test-dma-perf: explicitly disabled via build config 00:02:58.600 test-eventdev: explicitly disabled via build config 00:02:58.600 test-fib: explicitly disabled via build config 00:02:58.600 test-flow-perf: explicitly disabled via build config 00:02:58.600 test-gpudev: explicitly disabled via build config 00:02:58.600 test-mldev: explicitly disabled via build config 00:02:58.600 test-pipeline: explicitly disabled via build config 
00:02:58.600 test-pmd: explicitly disabled via build config 00:02:58.600 test-regex: explicitly disabled via build config 00:02:58.600 test-sad: explicitly disabled via build config 00:02:58.600 test-security-perf: explicitly disabled via build config 00:02:58.600 00:02:58.600 libs: 00:02:58.600 argparse: explicitly disabled via build config 00:02:58.601 metrics: explicitly disabled via build config 00:02:58.601 acl: explicitly disabled via build config 00:02:58.601 bbdev: explicitly disabled via build config 00:02:58.601 bitratestats: explicitly disabled via build config 00:02:58.601 bpf: explicitly disabled via build config 00:02:58.601 cfgfile: explicitly disabled via build config 00:02:58.601 distributor: explicitly disabled via build config 00:02:58.601 efd: explicitly disabled via build config 00:02:58.601 eventdev: explicitly disabled via build config 00:02:58.601 dispatcher: explicitly disabled via build config 00:02:58.601 gpudev: explicitly disabled via build config 00:02:58.601 gro: explicitly disabled via build config 00:02:58.601 gso: explicitly disabled via build config 00:02:58.601 ip_frag: explicitly disabled via build config 00:02:58.601 jobstats: explicitly disabled via build config 00:02:58.601 latencystats: explicitly disabled via build config 00:02:58.601 lpm: explicitly disabled via build config 00:02:58.601 member: explicitly disabled via build config 00:02:58.601 pcapng: explicitly disabled via build config 00:02:58.601 rawdev: explicitly disabled via build config 00:02:58.601 regexdev: explicitly disabled via build config 00:02:58.601 mldev: explicitly disabled via build config 00:02:58.601 rib: explicitly disabled via build config 00:02:58.601 sched: explicitly disabled via build config 00:02:58.601 stack: explicitly disabled via build config 00:02:58.601 ipsec: explicitly disabled via build config 00:02:58.601 pdcp: explicitly disabled via build config 00:02:58.601 fib: explicitly disabled via build config 00:02:58.601 port: explicitly disabled via build config 00:02:58.601 pdump: explicitly disabled via build config 00:02:58.601 table: explicitly disabled via build config 00:02:58.601 pipeline: explicitly disabled via build config 00:02:58.601 graph: explicitly disabled via build config 00:02:58.601 node: explicitly disabled via build config 00:02:58.601 00:02:58.601 drivers: 00:02:58.601 common/cpt: not in enabled drivers build config 00:02:58.601 common/dpaax: not in enabled drivers build config 00:02:58.601 common/iavf: not in enabled drivers build config 00:02:58.601 common/idpf: not in enabled drivers build config 00:02:58.601 common/ionic: not in enabled drivers build config 00:02:58.601 common/mvep: not in enabled drivers build config 00:02:58.601 common/octeontx: not in enabled drivers build config 00:02:58.601 bus/auxiliary: not in enabled drivers build config 00:02:58.601 bus/cdx: not in enabled drivers build config 00:02:58.601 bus/dpaa: not in enabled drivers build config 00:02:58.601 bus/fslmc: not in enabled drivers build config 00:02:58.601 bus/ifpga: not in enabled drivers build config 00:02:58.601 bus/platform: not in enabled drivers build config 00:02:58.601 bus/uacce: not in enabled drivers build config 00:02:58.601 bus/vmbus: not in enabled drivers build config 00:02:58.601 common/cnxk: not in enabled drivers build config 00:02:58.601 common/mlx5: not in enabled drivers build config 00:02:58.601 common/nfp: not in enabled drivers build config 00:02:58.601 common/nitrox: not in enabled drivers build config 00:02:58.601 common/qat: not in 
enabled drivers build config 00:02:58.601 common/sfc_efx: not in enabled drivers build config 00:02:58.601 mempool/bucket: not in enabled drivers build config 00:02:58.601 mempool/cnxk: not in enabled drivers build config 00:02:58.601 mempool/dpaa: not in enabled drivers build config 00:02:58.601 mempool/dpaa2: not in enabled drivers build config 00:02:58.601 mempool/octeontx: not in enabled drivers build config 00:02:58.601 mempool/stack: not in enabled drivers build config 00:02:58.601 dma/cnxk: not in enabled drivers build config 00:02:58.601 dma/dpaa: not in enabled drivers build config 00:02:58.601 dma/dpaa2: not in enabled drivers build config 00:02:58.601 dma/hisilicon: not in enabled drivers build config 00:02:58.601 dma/idxd: not in enabled drivers build config 00:02:58.601 dma/ioat: not in enabled drivers build config 00:02:58.601 dma/skeleton: not in enabled drivers build config 00:02:58.601 net/af_packet: not in enabled drivers build config 00:02:58.601 net/af_xdp: not in enabled drivers build config 00:02:58.601 net/ark: not in enabled drivers build config 00:02:58.601 net/atlantic: not in enabled drivers build config 00:02:58.601 net/avp: not in enabled drivers build config 00:02:58.601 net/axgbe: not in enabled drivers build config 00:02:58.601 net/bnx2x: not in enabled drivers build config 00:02:58.601 net/bnxt: not in enabled drivers build config 00:02:58.601 net/bonding: not in enabled drivers build config 00:02:58.601 net/cnxk: not in enabled drivers build config 00:02:58.601 net/cpfl: not in enabled drivers build config 00:02:58.601 net/cxgbe: not in enabled drivers build config 00:02:58.601 net/dpaa: not in enabled drivers build config 00:02:58.601 net/dpaa2: not in enabled drivers build config 00:02:58.601 net/e1000: not in enabled drivers build config 00:02:58.601 net/ena: not in enabled drivers build config 00:02:58.601 net/enetc: not in enabled drivers build config 00:02:58.601 net/enetfec: not in enabled drivers build config 00:02:58.601 net/enic: not in enabled drivers build config 00:02:58.601 net/failsafe: not in enabled drivers build config 00:02:58.601 net/fm10k: not in enabled drivers build config 00:02:58.601 net/gve: not in enabled drivers build config 00:02:58.601 net/hinic: not in enabled drivers build config 00:02:58.601 net/hns3: not in enabled drivers build config 00:02:58.601 net/i40e: not in enabled drivers build config 00:02:58.601 net/iavf: not in enabled drivers build config 00:02:58.601 net/ice: not in enabled drivers build config 00:02:58.601 net/idpf: not in enabled drivers build config 00:02:58.601 net/igc: not in enabled drivers build config 00:02:58.601 net/ionic: not in enabled drivers build config 00:02:58.601 net/ipn3ke: not in enabled drivers build config 00:02:58.601 net/ixgbe: not in enabled drivers build config 00:02:58.601 net/mana: not in enabled drivers build config 00:02:58.601 net/memif: not in enabled drivers build config 00:02:58.601 net/mlx4: not in enabled drivers build config 00:02:58.601 net/mlx5: not in enabled drivers build config 00:02:58.601 net/mvneta: not in enabled drivers build config 00:02:58.601 net/mvpp2: not in enabled drivers build config 00:02:58.601 net/netvsc: not in enabled drivers build config 00:02:58.601 net/nfb: not in enabled drivers build config 00:02:58.601 net/nfp: not in enabled drivers build config 00:02:58.601 net/ngbe: not in enabled drivers build config 00:02:58.601 net/null: not in enabled drivers build config 00:02:58.601 net/octeontx: not in enabled drivers build config 00:02:58.601 
net/octeon_ep: not in enabled drivers build config 00:02:58.601 net/pcap: not in enabled drivers build config 00:02:58.601 net/pfe: not in enabled drivers build config 00:02:58.601 net/qede: not in enabled drivers build config 00:02:58.601 net/ring: not in enabled drivers build config 00:02:58.601 net/sfc: not in enabled drivers build config 00:02:58.601 net/softnic: not in enabled drivers build config 00:02:58.601 net/tap: not in enabled drivers build config 00:02:58.601 net/thunderx: not in enabled drivers build config 00:02:58.601 net/txgbe: not in enabled drivers build config 00:02:58.601 net/vdev_netvsc: not in enabled drivers build config 00:02:58.601 net/vhost: not in enabled drivers build config 00:02:58.601 net/virtio: not in enabled drivers build config 00:02:58.601 net/vmxnet3: not in enabled drivers build config 00:02:58.601 raw/*: missing internal dependency, "rawdev" 00:02:58.601 crypto/armv8: not in enabled drivers build config 00:02:58.601 crypto/bcmfs: not in enabled drivers build config 00:02:58.601 crypto/caam_jr: not in enabled drivers build config 00:02:58.601 crypto/ccp: not in enabled drivers build config 00:02:58.601 crypto/cnxk: not in enabled drivers build config 00:02:58.601 crypto/dpaa_sec: not in enabled drivers build config 00:02:58.601 crypto/dpaa2_sec: not in enabled drivers build config 00:02:58.601 crypto/ipsec_mb: not in enabled drivers build config 00:02:58.601 crypto/mlx5: not in enabled drivers build config 00:02:58.601 crypto/mvsam: not in enabled drivers build config 00:02:58.601 crypto/nitrox: not in enabled drivers build config 00:02:58.601 crypto/null: not in enabled drivers build config 00:02:58.601 crypto/octeontx: not in enabled drivers build config 00:02:58.601 crypto/openssl: not in enabled drivers build config 00:02:58.601 crypto/scheduler: not in enabled drivers build config 00:02:58.601 crypto/uadk: not in enabled drivers build config 00:02:58.601 crypto/virtio: not in enabled drivers build config 00:02:58.601 compress/isal: not in enabled drivers build config 00:02:58.601 compress/mlx5: not in enabled drivers build config 00:02:58.601 compress/nitrox: not in enabled drivers build config 00:02:58.601 compress/octeontx: not in enabled drivers build config 00:02:58.601 compress/zlib: not in enabled drivers build config 00:02:58.601 regex/*: missing internal dependency, "regexdev" 00:02:58.601 ml/*: missing internal dependency, "mldev" 00:02:58.601 vdpa/ifc: not in enabled drivers build config 00:02:58.601 vdpa/mlx5: not in enabled drivers build config 00:02:58.601 vdpa/nfp: not in enabled drivers build config 00:02:58.601 vdpa/sfc: not in enabled drivers build config 00:02:58.601 event/*: missing internal dependency, "eventdev" 00:02:58.601 baseband/*: missing internal dependency, "bbdev" 00:02:58.601 gpu/*: missing internal dependency, "gpudev" 00:02:58.601 00:02:58.601 00:02:58.601 Build targets in project: 85 00:02:58.601 00:02:58.601 DPDK 24.03.0 00:02:58.601 00:02:58.601 User defined options 00:02:58.601 buildtype : debug 00:02:58.601 default_library : shared 00:02:58.601 libdir : lib 00:02:58.601 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:58.601 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:58.601 c_link_args : 00:02:58.601 cpu_instruction_set: native 00:02:58.601 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:58.601 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:58.601 enable_docs : false 00:02:58.602 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:58.602 enable_kmods : false 00:02:58.602 max_lcores : 128 00:02:58.602 tests : false 00:02:58.602 00:02:58.602 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.170 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:59.170 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:59.170 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.429 [3/268] Linking static target lib/librte_kvargs.a 00:02:59.429 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.429 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.429 [6/268] Linking static target lib/librte_log.a 00:02:59.687 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.946 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.946 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.946 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:59.946 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.204 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.204 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.204 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.204 [15/268] Linking static target lib/librte_telemetry.a 00:03:00.204 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.204 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.204 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.463 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.463 [20/268] Linking target lib/librte_log.so.24.1 00:03:00.463 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:00.721 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:00.980 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:00.980 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.980 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.980 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.980 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.980 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.980 [29/268] Linking target lib/librte_telemetry.so.24.1 00:03:01.239 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:01.239 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.239 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.239 [33/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:01.239 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.239 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.497 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:01.497 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:02.065 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.065 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.065 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:02.065 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.065 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.065 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.065 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.065 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.065 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.323 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.323 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.323 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.323 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.582 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:02.841 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.841 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.841 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.841 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.099 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.099 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.099 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.359 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.359 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.359 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:03.359 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.617 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:03.617 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:03.617 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.875 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.133 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.133 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.392 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.392 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.392 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.392 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.392 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.651 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.651 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.651 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.651 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.909 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:04.909 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.909 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:05.168 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.168 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:05.427 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.427 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.427 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.427 [86/268] Linking static target lib/librte_ring.a 00:03:05.427 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:05.427 [88/268] Linking static target lib/librte_eal.a 00:03:05.427 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.427 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.686 [91/268] Linking static target lib/librte_rcu.a 00:03:05.686 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.686 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.686 [94/268] Linking static target lib/librte_mempool.a 00:03:05.944 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.945 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:05.945 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.945 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:05.945 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.203 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.203 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:06.203 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.203 [103/268] Linking static target lib/librte_mbuf.a 00:03:06.462 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.721 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.721 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.721 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.721 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.721 [109/268] Linking static target lib/librte_meter.a 00:03:06.721 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.721 [111/268] Linking static target lib/librte_net.a 00:03:06.980 [112/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.240 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.240 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.240 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:07.240 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.240 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.240 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:07.240 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:07.812 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:07.812 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:08.071 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:08.071 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:08.071 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:08.330 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:08.330 [126/268] Linking static target lib/librte_pci.a 00:03:08.330 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:08.589 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:08.589 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:08.589 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:08.589 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:08.589 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.848 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:08.848 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:08.848 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:08.848 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:08.848 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:08.848 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:08.848 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.848 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:08.848 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:08.848 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:08.848 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:08.848 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:08.848 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:08.848 [146/268] Linking static target lib/librte_ethdev.a 00:03:08.848 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:09.417 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:09.417 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:09.417 [150/268] Linking static target lib/librte_cmdline.a 00:03:09.417 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:09.417 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:09.677 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.936 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:09.936 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:09.936 [156/268] Linking static target lib/librte_timer.a 00:03:10.195 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.195 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.195 [159/268] Linking static target lib/librte_hash.a 00:03:10.195 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.455 [161/268] Linking static target lib/librte_compressdev.a 00:03:10.455 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.455 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.455 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.455 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:10.714 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.973 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.973 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:10.973 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.973 [170/268] Linking static target lib/librte_dmadev.a 00:03:10.973 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.973 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:10.973 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.233 [174/268] Linking static target lib/librte_cryptodev.a 00:03:11.233 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:11.233 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.233 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.802 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:11.802 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:11.802 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:11.802 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:11.802 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:11.802 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.802 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:12.061 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:12.061 [186/268] Linking static target lib/librte_power.a 00:03:12.320 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:12.580 [188/268] Linking static target lib/librte_reorder.a 00:03:12.580 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:12.580 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:12.580 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:12.580 [192/268] Linking static target lib/librte_security.a 00:03:12.839 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:12.839 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:13.098 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.358 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.358 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.358 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:13.617 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:13.617 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.617 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:13.876 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:14.136 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:14.136 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:14.136 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:14.136 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:14.395 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:14.395 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:14.396 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:14.396 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:14.674 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:14.674 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:14.674 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:14.674 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:14.674 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:14.674 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:14.674 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:14.674 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:14.674 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:14.941 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:14.941 [221/268] Linking static target drivers/librte_bus_vdev.a 00:03:14.941 [222/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:14.941 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:14.941 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.941 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.941 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:14.941 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.201 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:16.139 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:16.139 [230/268] Linking static target lib/librte_vhost.a 00:03:16.398 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.398 [232/268] Linking target lib/librte_eal.so.24.1 00:03:16.657 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:16.657 [234/268] Linking target lib/librte_timer.so.24.1 00:03:16.657 [235/268] Linking target lib/librte_ring.so.24.1 00:03:16.657 [236/268] Linking target lib/librte_meter.so.24.1 00:03:16.657 [237/268] Linking target lib/librte_pci.so.24.1 00:03:16.657 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:16.657 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:16.657 [240/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.657 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:16.657 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:16.916 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:16.916 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:16.916 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:16.916 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:16.916 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:16.916 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:16.916 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:16.916 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:16.916 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:16.916 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:17.175 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:17.175 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:17.175 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:17.175 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:17.175 [257/268] Linking target lib/librte_net.so.24.1 00:03:17.434 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:17.434 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:17.434 [260/268] Linking target lib/librte_security.so.24.1 00:03:17.434 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:17.434 [262/268] Linking target lib/librte_hash.so.24.1 00:03:17.434 [263/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.434 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:17.434 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:17.434 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:17.693 [267/268] Linking target lib/librte_power.so.24.1 00:03:17.693 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:17.693 INFO: autodetecting backend as ninja 00:03:17.693 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:39.627 CC lib/ut/ut.o 00:03:39.627 CC lib/ut_mock/mock.o 00:03:39.627 CC lib/log/log.o 00:03:39.627 CC lib/log/log_flags.o 
00:03:39.627 CC lib/log/log_deprecated.o 00:03:39.627 LIB libspdk_ut_mock.a 00:03:39.627 LIB libspdk_ut.a 00:03:39.627 LIB libspdk_log.a 00:03:39.627 SO libspdk_ut_mock.so.6.0 00:03:39.627 SO libspdk_ut.so.2.0 00:03:39.627 SO libspdk_log.so.7.1 00:03:39.627 SYMLINK libspdk_ut.so 00:03:39.627 SYMLINK libspdk_ut_mock.so 00:03:39.627 SYMLINK libspdk_log.so 00:03:39.627 CC lib/util/base64.o 00:03:39.627 CC lib/util/bit_array.o 00:03:39.627 CXX lib/trace_parser/trace.o 00:03:39.627 CC lib/util/cpuset.o 00:03:39.627 CC lib/ioat/ioat.o 00:03:39.627 CC lib/util/crc16.o 00:03:39.627 CC lib/dma/dma.o 00:03:39.627 CC lib/util/crc32.o 00:03:39.627 CC lib/util/crc32c.o 00:03:39.627 CC lib/vfio_user/host/vfio_user_pci.o 00:03:39.627 CC lib/vfio_user/host/vfio_user.o 00:03:39.627 CC lib/util/crc32_ieee.o 00:03:39.885 CC lib/util/crc64.o 00:03:39.885 CC lib/util/dif.o 00:03:39.885 LIB libspdk_dma.a 00:03:39.885 CC lib/util/fd.o 00:03:39.885 CC lib/util/fd_group.o 00:03:39.885 SO libspdk_dma.so.5.0 00:03:39.885 CC lib/util/file.o 00:03:39.885 LIB libspdk_ioat.a 00:03:39.885 SYMLINK libspdk_dma.so 00:03:39.885 CC lib/util/hexlify.o 00:03:39.885 CC lib/util/iov.o 00:03:39.885 CC lib/util/math.o 00:03:39.885 SO libspdk_ioat.so.7.0 00:03:39.885 CC lib/util/net.o 00:03:39.885 LIB libspdk_vfio_user.a 00:03:40.144 SYMLINK libspdk_ioat.so 00:03:40.144 CC lib/util/pipe.o 00:03:40.144 SO libspdk_vfio_user.so.5.0 00:03:40.144 CC lib/util/strerror_tls.o 00:03:40.144 CC lib/util/string.o 00:03:40.144 CC lib/util/uuid.o 00:03:40.144 SYMLINK libspdk_vfio_user.so 00:03:40.144 CC lib/util/xor.o 00:03:40.144 CC lib/util/zipf.o 00:03:40.144 CC lib/util/md5.o 00:03:40.403 LIB libspdk_util.a 00:03:40.403 SO libspdk_util.so.10.1 00:03:40.662 LIB libspdk_trace_parser.a 00:03:40.662 SO libspdk_trace_parser.so.6.0 00:03:40.662 SYMLINK libspdk_util.so 00:03:40.662 SYMLINK libspdk_trace_parser.so 00:03:40.921 CC lib/json/json_parse.o 00:03:40.921 CC lib/env_dpdk/env.o 00:03:40.921 CC lib/json/json_util.o 00:03:40.921 CC lib/rdma_utils/rdma_utils.o 00:03:40.921 CC lib/env_dpdk/memory.o 00:03:40.921 CC lib/env_dpdk/pci.o 00:03:40.921 CC lib/idxd/idxd.o 00:03:40.921 CC lib/json/json_write.o 00:03:40.921 CC lib/conf/conf.o 00:03:40.921 CC lib/vmd/vmd.o 00:03:41.180 CC lib/vmd/led.o 00:03:41.180 LIB libspdk_conf.a 00:03:41.180 CC lib/env_dpdk/init.o 00:03:41.180 SO libspdk_conf.so.6.0 00:03:41.180 LIB libspdk_rdma_utils.a 00:03:41.180 LIB libspdk_json.a 00:03:41.180 SO libspdk_rdma_utils.so.1.0 00:03:41.180 SYMLINK libspdk_conf.so 00:03:41.180 SO libspdk_json.so.6.0 00:03:41.180 CC lib/idxd/idxd_user.o 00:03:41.180 SYMLINK libspdk_rdma_utils.so 00:03:41.180 CC lib/idxd/idxd_kernel.o 00:03:41.180 CC lib/env_dpdk/threads.o 00:03:41.180 SYMLINK libspdk_json.so 00:03:41.180 CC lib/env_dpdk/pci_ioat.o 00:03:41.439 CC lib/env_dpdk/pci_virtio.o 00:03:41.439 CC lib/env_dpdk/pci_vmd.o 00:03:41.439 CC lib/rdma_provider/common.o 00:03:41.439 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:41.439 LIB libspdk_idxd.a 00:03:41.439 CC lib/env_dpdk/pci_idxd.o 00:03:41.439 SO libspdk_idxd.so.12.1 00:03:41.439 LIB libspdk_vmd.a 00:03:41.439 CC lib/env_dpdk/pci_event.o 00:03:41.439 SO libspdk_vmd.so.6.0 00:03:41.439 SYMLINK libspdk_idxd.so 00:03:41.439 CC lib/env_dpdk/sigbus_handler.o 00:03:41.439 CC lib/env_dpdk/pci_dpdk.o 00:03:41.439 CC lib/jsonrpc/jsonrpc_server.o 00:03:41.698 SYMLINK libspdk_vmd.so 00:03:41.698 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:41.698 CC lib/jsonrpc/jsonrpc_client.o 00:03:41.698 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:41.698 
LIB libspdk_rdma_provider.a 00:03:41.698 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.698 SO libspdk_rdma_provider.so.7.0 00:03:41.698 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.698 SYMLINK libspdk_rdma_provider.so 00:03:41.957 LIB libspdk_jsonrpc.a 00:03:41.957 SO libspdk_jsonrpc.so.6.0 00:03:41.957 SYMLINK libspdk_jsonrpc.so 00:03:42.216 LIB libspdk_env_dpdk.a 00:03:42.216 CC lib/rpc/rpc.o 00:03:42.216 SO libspdk_env_dpdk.so.15.1 00:03:42.474 SYMLINK libspdk_env_dpdk.so 00:03:42.474 LIB libspdk_rpc.a 00:03:42.475 SO libspdk_rpc.so.6.0 00:03:42.475 SYMLINK libspdk_rpc.so 00:03:42.734 CC lib/keyring/keyring.o 00:03:42.734 CC lib/keyring/keyring_rpc.o 00:03:42.734 CC lib/trace/trace_rpc.o 00:03:42.734 CC lib/trace/trace.o 00:03:42.734 CC lib/trace/trace_flags.o 00:03:42.734 CC lib/notify/notify.o 00:03:42.734 CC lib/notify/notify_rpc.o 00:03:42.993 LIB libspdk_notify.a 00:03:42.993 SO libspdk_notify.so.6.0 00:03:42.993 LIB libspdk_trace.a 00:03:42.993 LIB libspdk_keyring.a 00:03:42.993 SO libspdk_keyring.so.2.0 00:03:42.993 SO libspdk_trace.so.11.0 00:03:42.993 SYMLINK libspdk_notify.so 00:03:42.993 SYMLINK libspdk_keyring.so 00:03:42.993 SYMLINK libspdk_trace.so 00:03:43.560 CC lib/thread/thread.o 00:03:43.560 CC lib/thread/iobuf.o 00:03:43.560 CC lib/sock/sock.o 00:03:43.560 CC lib/sock/sock_rpc.o 00:03:43.819 LIB libspdk_sock.a 00:03:43.819 SO libspdk_sock.so.10.0 00:03:44.077 SYMLINK libspdk_sock.so 00:03:44.336 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:44.336 CC lib/nvme/nvme_ctrlr.o 00:03:44.336 CC lib/nvme/nvme_fabric.o 00:03:44.336 CC lib/nvme/nvme_ns_cmd.o 00:03:44.336 CC lib/nvme/nvme_ns.o 00:03:44.336 CC lib/nvme/nvme_pcie.o 00:03:44.336 CC lib/nvme/nvme_pcie_common.o 00:03:44.336 CC lib/nvme/nvme.o 00:03:44.336 CC lib/nvme/nvme_qpair.o 00:03:44.904 LIB libspdk_thread.a 00:03:44.904 SO libspdk_thread.so.11.0 00:03:45.162 SYMLINK libspdk_thread.so 00:03:45.162 CC lib/nvme/nvme_quirks.o 00:03:45.162 CC lib/nvme/nvme_transport.o 00:03:45.162 CC lib/nvme/nvme_discovery.o 00:03:45.162 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:45.162 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:45.162 CC lib/nvme/nvme_tcp.o 00:03:45.421 CC lib/nvme/nvme_opal.o 00:03:45.421 CC lib/accel/accel.o 00:03:45.421 CC lib/accel/accel_rpc.o 00:03:45.680 CC lib/nvme/nvme_io_msg.o 00:03:45.680 CC lib/accel/accel_sw.o 00:03:45.680 CC lib/nvme/nvme_poll_group.o 00:03:45.680 CC lib/nvme/nvme_zns.o 00:03:45.939 CC lib/nvme/nvme_stubs.o 00:03:45.939 CC lib/nvme/nvme_auth.o 00:03:45.939 CC lib/nvme/nvme_cuse.o 00:03:45.939 CC lib/nvme/nvme_rdma.o 00:03:46.506 CC lib/blob/blobstore.o 00:03:46.506 LIB libspdk_accel.a 00:03:46.506 SO libspdk_accel.so.16.0 00:03:46.506 CC lib/init/json_config.o 00:03:46.506 CC lib/virtio/virtio.o 00:03:46.506 SYMLINK libspdk_accel.so 00:03:46.506 CC lib/init/subsystem.o 00:03:46.765 CC lib/blob/request.o 00:03:46.765 CC lib/fsdev/fsdev.o 00:03:46.765 CC lib/fsdev/fsdev_io.o 00:03:46.765 CC lib/fsdev/fsdev_rpc.o 00:03:46.765 CC lib/init/subsystem_rpc.o 00:03:47.023 CC lib/blob/zeroes.o 00:03:47.023 CC lib/virtio/virtio_vhost_user.o 00:03:47.023 CC lib/blob/blob_bs_dev.o 00:03:47.023 CC lib/init/rpc.o 00:03:47.023 CC lib/virtio/virtio_vfio_user.o 00:03:47.023 CC lib/bdev/bdev.o 00:03:47.023 CC lib/virtio/virtio_pci.o 00:03:47.282 LIB libspdk_init.a 00:03:47.282 CC lib/bdev/bdev_rpc.o 00:03:47.282 CC lib/bdev/bdev_zone.o 00:03:47.282 SO libspdk_init.so.6.0 00:03:47.282 CC lib/bdev/part.o 00:03:47.282 CC lib/bdev/scsi_nvme.o 00:03:47.282 LIB libspdk_fsdev.a 00:03:47.282 SYMLINK libspdk_init.so 00:03:47.282 SO 
libspdk_fsdev.so.2.0 00:03:47.282 LIB libspdk_virtio.a 00:03:47.282 SYMLINK libspdk_fsdev.so 00:03:47.540 SO libspdk_virtio.so.7.0 00:03:47.540 LIB libspdk_nvme.a 00:03:47.540 CC lib/event/app.o 00:03:47.540 CC lib/event/reactor.o 00:03:47.540 CC lib/event/log_rpc.o 00:03:47.540 SYMLINK libspdk_virtio.so 00:03:47.540 CC lib/event/app_rpc.o 00:03:47.540 CC lib/event/scheduler_static.o 00:03:47.540 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:47.540 SO libspdk_nvme.so.15.0 00:03:47.799 LIB libspdk_event.a 00:03:47.799 SYMLINK libspdk_nvme.so 00:03:48.059 SO libspdk_event.so.14.0 00:03:48.059 SYMLINK libspdk_event.so 00:03:48.318 LIB libspdk_fuse_dispatcher.a 00:03:48.318 SO libspdk_fuse_dispatcher.so.1.0 00:03:48.318 SYMLINK libspdk_fuse_dispatcher.so 00:03:49.255 LIB libspdk_blob.a 00:03:49.255 SO libspdk_blob.so.12.0 00:03:49.514 SYMLINK libspdk_blob.so 00:03:49.514 LIB libspdk_bdev.a 00:03:49.514 SO libspdk_bdev.so.17.0 00:03:49.773 CC lib/lvol/lvol.o 00:03:49.773 CC lib/blobfs/blobfs.o 00:03:49.773 CC lib/blobfs/tree.o 00:03:49.773 SYMLINK libspdk_bdev.so 00:03:50.032 CC lib/scsi/dev.o 00:03:50.032 CC lib/scsi/lun.o 00:03:50.032 CC lib/ftl/ftl_core.o 00:03:50.032 CC lib/ftl/ftl_init.o 00:03:50.032 CC lib/scsi/port.o 00:03:50.032 CC lib/nvmf/ctrlr.o 00:03:50.032 CC lib/ublk/ublk.o 00:03:50.032 CC lib/nbd/nbd.o 00:03:50.032 CC lib/nbd/nbd_rpc.o 00:03:50.290 CC lib/ftl/ftl_layout.o 00:03:50.290 CC lib/ublk/ublk_rpc.o 00:03:50.290 CC lib/scsi/scsi.o 00:03:50.291 CC lib/scsi/scsi_bdev.o 00:03:50.291 CC lib/scsi/scsi_pr.o 00:03:50.291 CC lib/nvmf/ctrlr_discovery.o 00:03:50.549 CC lib/nvmf/ctrlr_bdev.o 00:03:50.549 LIB libspdk_nbd.a 00:03:50.549 SO libspdk_nbd.so.7.0 00:03:50.549 LIB libspdk_blobfs.a 00:03:50.549 SO libspdk_blobfs.so.11.0 00:03:50.549 SYMLINK libspdk_nbd.so 00:03:50.549 CC lib/scsi/scsi_rpc.o 00:03:50.549 CC lib/ftl/ftl_debug.o 00:03:50.549 LIB libspdk_lvol.a 00:03:50.549 SYMLINK libspdk_blobfs.so 00:03:50.549 CC lib/ftl/ftl_io.o 00:03:50.549 LIB libspdk_ublk.a 00:03:50.549 SO libspdk_lvol.so.11.0 00:03:50.549 SO libspdk_ublk.so.3.0 00:03:50.813 SYMLINK libspdk_lvol.so 00:03:50.813 CC lib/ftl/ftl_sb.o 00:03:50.813 CC lib/scsi/task.o 00:03:50.813 SYMLINK libspdk_ublk.so 00:03:50.813 CC lib/ftl/ftl_l2p.o 00:03:50.813 CC lib/ftl/ftl_l2p_flat.o 00:03:50.813 CC lib/ftl/ftl_nv_cache.o 00:03:50.813 CC lib/ftl/ftl_band.o 00:03:50.813 CC lib/ftl/ftl_band_ops.o 00:03:50.813 CC lib/nvmf/subsystem.o 00:03:50.813 CC lib/ftl/ftl_writer.o 00:03:50.813 LIB libspdk_scsi.a 00:03:50.813 CC lib/ftl/ftl_rq.o 00:03:50.813 CC lib/ftl/ftl_reloc.o 00:03:51.095 SO libspdk_scsi.so.9.0 00:03:51.095 SYMLINK libspdk_scsi.so 00:03:51.095 CC lib/ftl/ftl_l2p_cache.o 00:03:51.095 CC lib/nvmf/nvmf.o 00:03:51.095 CC lib/nvmf/nvmf_rpc.o 00:03:51.095 CC lib/nvmf/transport.o 00:03:51.095 CC lib/nvmf/tcp.o 00:03:51.095 CC lib/nvmf/stubs.o 00:03:51.367 CC lib/nvmf/mdns_server.o 00:03:51.626 CC lib/nvmf/rdma.o 00:03:51.626 CC lib/nvmf/auth.o 00:03:51.626 CC lib/ftl/ftl_p2l.o 00:03:51.885 CC lib/ftl/ftl_p2l_log.o 00:03:51.885 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.886 CC lib/iscsi/conn.o 00:03:51.886 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.886 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:52.144 CC lib/iscsi/init_grp.o 00:03:52.144 CC lib/iscsi/iscsi.o 00:03:52.144 CC lib/iscsi/param.o 00:03:52.144 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:52.144 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:52.144 CC lib/iscsi/portal_grp.o 00:03:52.411 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:52.411 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:52.411 CC 
lib/iscsi/tgt_node.o 00:03:52.411 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:52.411 CC lib/iscsi/iscsi_subsystem.o 00:03:52.411 CC lib/iscsi/iscsi_rpc.o 00:03:52.674 CC lib/vhost/vhost.o 00:03:52.674 CC lib/vhost/vhost_rpc.o 00:03:52.674 CC lib/vhost/vhost_scsi.o 00:03:52.674 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:52.674 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.932 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.932 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.932 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.932 CC lib/vhost/vhost_blk.o 00:03:52.932 CC lib/vhost/rte_vhost_user.o 00:03:53.191 CC lib/ftl/utils/ftl_conf.o 00:03:53.191 CC lib/iscsi/task.o 00:03:53.450 CC lib/ftl/utils/ftl_md.o 00:03:53.450 CC lib/ftl/utils/ftl_mempool.o 00:03:53.450 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.450 CC lib/ftl/utils/ftl_property.o 00:03:53.450 LIB libspdk_iscsi.a 00:03:53.450 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.450 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.708 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.708 SO libspdk_iscsi.so.8.0 00:03:53.708 LIB libspdk_nvmf.a 00:03:53.708 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.708 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.708 SYMLINK libspdk_iscsi.so 00:03:53.708 SO libspdk_nvmf.so.20.0 00:03:53.708 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.708 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.708 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.708 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.967 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.967 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.967 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:53.967 SYMLINK libspdk_nvmf.so 00:03:53.967 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:53.967 CC lib/ftl/base/ftl_base_dev.o 00:03:53.967 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.967 CC lib/ftl/ftl_trace.o 00:03:54.226 LIB libspdk_vhost.a 00:03:54.226 SO libspdk_vhost.so.8.0 00:03:54.226 SYMLINK libspdk_vhost.so 00:03:54.226 LIB libspdk_ftl.a 00:03:54.485 SO libspdk_ftl.so.9.0 00:03:54.744 SYMLINK libspdk_ftl.so 00:03:55.003 CC module/env_dpdk/env_dpdk_rpc.o 00:03:55.262 CC module/accel/iaa/accel_iaa.o 00:03:55.262 CC module/accel/dsa/accel_dsa.o 00:03:55.262 CC module/blob/bdev/blob_bdev.o 00:03:55.262 CC module/accel/ioat/accel_ioat.o 00:03:55.262 CC module/accel/error/accel_error.o 00:03:55.262 CC module/sock/posix/posix.o 00:03:55.262 CC module/fsdev/aio/fsdev_aio.o 00:03:55.262 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:55.262 CC module/keyring/file/keyring.o 00:03:55.262 LIB libspdk_env_dpdk_rpc.a 00:03:55.262 SO libspdk_env_dpdk_rpc.so.6.0 00:03:55.262 SYMLINK libspdk_env_dpdk_rpc.so 00:03:55.262 CC module/accel/error/accel_error_rpc.o 00:03:55.262 CC module/keyring/file/keyring_rpc.o 00:03:55.262 CC module/accel/ioat/accel_ioat_rpc.o 00:03:55.262 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.262 LIB libspdk_scheduler_dynamic.a 00:03:55.262 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:55.521 SO libspdk_scheduler_dynamic.so.4.0 00:03:55.521 LIB libspdk_blob_bdev.a 00:03:55.521 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.521 LIB libspdk_accel_error.a 00:03:55.521 SO libspdk_blob_bdev.so.12.0 00:03:55.521 LIB libspdk_keyring_file.a 00:03:55.521 SYMLINK libspdk_scheduler_dynamic.so 00:03:55.521 SO libspdk_accel_error.so.2.0 00:03:55.521 SO libspdk_keyring_file.so.2.0 00:03:55.521 LIB libspdk_accel_ioat.a 00:03:55.521 LIB libspdk_accel_iaa.a 00:03:55.521 SYMLINK libspdk_blob_bdev.so 00:03:55.521 SO libspdk_accel_ioat.so.6.0 00:03:55.521 SYMLINK libspdk_accel_error.so 00:03:55.521 SO libspdk_accel_iaa.so.3.0 
00:03:55.521 SYMLINK libspdk_keyring_file.so 00:03:55.521 LIB libspdk_accel_dsa.a 00:03:55.521 SYMLINK libspdk_accel_ioat.so 00:03:55.521 CC module/fsdev/aio/linux_aio_mgr.o 00:03:55.521 SYMLINK libspdk_accel_iaa.so 00:03:55.521 SO libspdk_accel_dsa.so.5.0 00:03:55.780 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:55.780 SYMLINK libspdk_accel_dsa.so 00:03:55.780 CC module/keyring/linux/keyring.o 00:03:55.780 CC module/sock/uring/uring.o 00:03:55.780 CC module/keyring/linux/keyring_rpc.o 00:03:55.780 CC module/scheduler/gscheduler/gscheduler.o 00:03:55.780 LIB libspdk_fsdev_aio.a 00:03:55.780 CC module/bdev/delay/vbdev_delay.o 00:03:55.780 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.780 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:55.780 SO libspdk_fsdev_aio.so.1.0 00:03:55.780 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.780 LIB libspdk_sock_posix.a 00:03:55.780 CC module/bdev/error/vbdev_error.o 00:03:56.039 SO libspdk_sock_posix.so.6.0 00:03:56.039 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.039 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:56.039 LIB libspdk_keyring_linux.a 00:03:56.039 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.039 SYMLINK libspdk_fsdev_aio.so 00:03:56.039 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.039 SO libspdk_keyring_linux.so.1.0 00:03:56.039 LIB libspdk_scheduler_gscheduler.a 00:03:56.039 SYMLINK libspdk_sock_posix.so 00:03:56.039 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.039 SYMLINK libspdk_keyring_linux.so 00:03:56.039 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.039 LIB libspdk_blobfs_bdev.a 00:03:56.039 SO libspdk_blobfs_bdev.so.6.0 00:03:56.298 LIB libspdk_bdev_error.a 00:03:56.298 CC module/bdev/gpt/gpt.o 00:03:56.298 SO libspdk_bdev_error.so.6.0 00:03:56.298 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.298 LIB libspdk_bdev_delay.a 00:03:56.298 SYMLINK libspdk_blobfs_bdev.so 00:03:56.298 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:56.298 CC module/bdev/malloc/bdev_malloc.o 00:03:56.298 CC module/bdev/null/bdev_null.o 00:03:56.298 SO libspdk_bdev_delay.so.6.0 00:03:56.298 SYMLINK libspdk_bdev_error.so 00:03:56.298 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:56.298 CC module/bdev/nvme/bdev_nvme.o 00:03:56.298 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.298 SYMLINK libspdk_bdev_delay.so 00:03:56.298 CC module/bdev/null/bdev_null_rpc.o 00:03:56.298 CC module/bdev/gpt/vbdev_gpt.o 00:03:56.556 LIB libspdk_sock_uring.a 00:03:56.556 SO libspdk_sock_uring.so.5.0 00:03:56.556 LIB libspdk_bdev_null.a 00:03:56.556 SO libspdk_bdev_null.so.6.0 00:03:56.556 SYMLINK libspdk_sock_uring.so 00:03:56.556 LIB libspdk_bdev_malloc.a 00:03:56.556 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.556 SYMLINK libspdk_bdev_null.so 00:03:56.556 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.556 CC module/bdev/raid/bdev_raid.o 00:03:56.556 SO libspdk_bdev_malloc.so.6.0 00:03:56.850 LIB libspdk_bdev_gpt.a 00:03:56.850 CC module/bdev/split/vbdev_split.o 00:03:56.850 SO libspdk_bdev_gpt.so.6.0 00:03:56.850 LIB libspdk_bdev_lvol.a 00:03:56.850 SYMLINK libspdk_bdev_malloc.so 00:03:56.850 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.850 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.850 SYMLINK libspdk_bdev_gpt.so 00:03:56.850 SO libspdk_bdev_lvol.so.6.0 00:03:56.850 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.850 CC module/bdev/uring/bdev_uring.o 00:03:56.850 LIB libspdk_bdev_passthru.a 00:03:56.850 SYMLINK libspdk_bdev_lvol.so 00:03:56.850 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.850 SO libspdk_bdev_passthru.so.6.0 00:03:56.850 
SYMLINK libspdk_bdev_passthru.so 00:03:56.850 CC module/bdev/nvme/nvme_rpc.o 00:03:56.850 CC module/bdev/nvme/bdev_mdns_client.o 00:03:57.108 CC module/bdev/nvme/vbdev_opal.o 00:03:57.108 LIB libspdk_bdev_split.a 00:03:57.109 SO libspdk_bdev_split.so.6.0 00:03:57.109 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.109 SYMLINK libspdk_bdev_split.so 00:03:57.109 CC module/bdev/uring/bdev_uring_rpc.o 00:03:57.109 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.109 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:57.109 CC module/bdev/raid/raid0.o 00:03:57.109 CC module/bdev/raid/raid1.o 00:03:57.367 CC module/bdev/raid/concat.o 00:03:57.367 LIB libspdk_bdev_zone_block.a 00:03:57.367 LIB libspdk_bdev_uring.a 00:03:57.367 SO libspdk_bdev_zone_block.so.6.0 00:03:57.367 SO libspdk_bdev_uring.so.6.0 00:03:57.367 SYMLINK libspdk_bdev_zone_block.so 00:03:57.367 SYMLINK libspdk_bdev_uring.so 00:03:57.367 CC module/bdev/iscsi/bdev_iscsi.o 00:03:57.367 CC module/bdev/aio/bdev_aio.o 00:03:57.367 CC module/bdev/ftl/bdev_ftl.o 00:03:57.367 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:57.367 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:57.367 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.626 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:57.626 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:57.626 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.626 LIB libspdk_bdev_raid.a 00:03:57.885 SO libspdk_bdev_raid.so.6.0 00:03:57.885 LIB libspdk_bdev_ftl.a 00:03:57.885 SYMLINK libspdk_bdev_raid.so 00:03:57.885 SO libspdk_bdev_ftl.so.6.0 00:03:57.885 LIB libspdk_bdev_aio.a 00:03:57.885 LIB libspdk_bdev_iscsi.a 00:03:57.885 SO libspdk_bdev_aio.so.6.0 00:03:57.885 SO libspdk_bdev_iscsi.so.6.0 00:03:57.885 SYMLINK libspdk_bdev_ftl.so 00:03:57.885 SYMLINK libspdk_bdev_aio.so 00:03:57.885 SYMLINK libspdk_bdev_iscsi.so 00:03:58.143 LIB libspdk_bdev_virtio.a 00:03:58.143 SO libspdk_bdev_virtio.so.6.0 00:03:58.143 SYMLINK libspdk_bdev_virtio.so 00:03:58.712 LIB libspdk_bdev_nvme.a 00:03:58.971 SO libspdk_bdev_nvme.so.7.1 00:03:58.971 SYMLINK libspdk_bdev_nvme.so 00:03:59.539 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.539 CC module/event/subsystems/fsdev/fsdev.o 00:03:59.539 CC module/event/subsystems/vmd/vmd.o 00:03:59.539 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.539 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.539 CC module/event/subsystems/sock/sock.o 00:03:59.539 CC module/event/subsystems/keyring/keyring.o 00:03:59.539 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.539 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.539 LIB libspdk_event_fsdev.a 00:03:59.539 LIB libspdk_event_keyring.a 00:03:59.539 SO libspdk_event_fsdev.so.1.0 00:03:59.539 SO libspdk_event_keyring.so.1.0 00:03:59.539 LIB libspdk_event_vhost_blk.a 00:03:59.539 LIB libspdk_event_vmd.a 00:03:59.539 LIB libspdk_event_scheduler.a 00:03:59.539 LIB libspdk_event_sock.a 00:03:59.539 SO libspdk_event_vhost_blk.so.3.0 00:03:59.539 SO libspdk_event_sock.so.5.0 00:03:59.539 SO libspdk_event_scheduler.so.4.0 00:03:59.539 LIB libspdk_event_iobuf.a 00:03:59.539 SO libspdk_event_vmd.so.6.0 00:03:59.798 SYMLINK libspdk_event_keyring.so 00:03:59.798 SYMLINK libspdk_event_fsdev.so 00:03:59.798 SO libspdk_event_iobuf.so.3.0 00:03:59.798 SYMLINK libspdk_event_vhost_blk.so 00:03:59.798 SYMLINK libspdk_event_sock.so 00:03:59.798 SYMLINK libspdk_event_scheduler.so 00:03:59.798 SYMLINK libspdk_event_vmd.so 00:03:59.798 SYMLINK libspdk_event_iobuf.so 00:04:00.057 CC module/event/subsystems/accel/accel.o 00:04:00.316 LIB 
libspdk_event_accel.a 00:04:00.316 SO libspdk_event_accel.so.6.0 00:04:00.316 SYMLINK libspdk_event_accel.so 00:04:00.575 CC module/event/subsystems/bdev/bdev.o 00:04:00.834 LIB libspdk_event_bdev.a 00:04:00.834 SO libspdk_event_bdev.so.6.0 00:04:00.834 SYMLINK libspdk_event_bdev.so 00:04:01.093 CC module/event/subsystems/scsi/scsi.o 00:04:01.093 CC module/event/subsystems/ublk/ublk.o 00:04:01.093 CC module/event/subsystems/nbd/nbd.o 00:04:01.093 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.093 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.352 LIB libspdk_event_nbd.a 00:04:01.352 LIB libspdk_event_ublk.a 00:04:01.352 LIB libspdk_event_scsi.a 00:04:01.352 SO libspdk_event_ublk.so.3.0 00:04:01.352 SO libspdk_event_nbd.so.6.0 00:04:01.352 SO libspdk_event_scsi.so.6.0 00:04:01.352 SYMLINK libspdk_event_nbd.so 00:04:01.352 SYMLINK libspdk_event_ublk.so 00:04:01.352 SYMLINK libspdk_event_scsi.so 00:04:01.352 LIB libspdk_event_nvmf.a 00:04:01.352 SO libspdk_event_nvmf.so.6.0 00:04:01.611 SYMLINK libspdk_event_nvmf.so 00:04:01.611 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.611 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.870 LIB libspdk_event_vhost_scsi.a 00:04:01.870 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.870 LIB libspdk_event_iscsi.a 00:04:01.870 SO libspdk_event_iscsi.so.6.0 00:04:01.870 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.870 SYMLINK libspdk_event_iscsi.so 00:04:02.127 SO libspdk.so.6.0 00:04:02.128 SYMLINK libspdk.so 00:04:02.386 CC app/trace_record/trace_record.o 00:04:02.386 CC app/spdk_lspci/spdk_lspci.o 00:04:02.386 CXX app/trace/trace.o 00:04:02.386 CC app/spdk_nvme_perf/perf.o 00:04:02.386 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.386 CC app/nvmf_tgt/nvmf_main.o 00:04:02.386 CC app/spdk_tgt/spdk_tgt.o 00:04:02.386 CC test/thread/poller_perf/poller_perf.o 00:04:02.645 CC examples/util/zipf/zipf.o 00:04:02.645 CC test/dma/test_dma/test_dma.o 00:04:02.645 LINK spdk_lspci 00:04:02.645 LINK nvmf_tgt 00:04:02.645 LINK spdk_trace_record 00:04:02.645 LINK poller_perf 00:04:02.645 LINK iscsi_tgt 00:04:02.645 LINK zipf 00:04:02.904 LINK spdk_tgt 00:04:02.904 CC app/spdk_nvme_identify/identify.o 00:04:02.904 LINK spdk_trace 00:04:02.904 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.163 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.163 CC examples/ioat/perf/perf.o 00:04:03.163 LINK test_dma 00:04:03.163 CC examples/thread/thread/thread_ex.o 00:04:03.163 CC examples/sock/hello_world/hello_sock.o 00:04:03.163 TEST_HEADER include/spdk/accel.h 00:04:03.163 TEST_HEADER include/spdk/accel_module.h 00:04:03.163 TEST_HEADER include/spdk/assert.h 00:04:03.163 TEST_HEADER include/spdk/barrier.h 00:04:03.163 TEST_HEADER include/spdk/base64.h 00:04:03.163 TEST_HEADER include/spdk/bdev.h 00:04:03.163 TEST_HEADER include/spdk/bdev_module.h 00:04:03.163 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.163 TEST_HEADER include/spdk/bit_array.h 00:04:03.163 TEST_HEADER include/spdk/bit_pool.h 00:04:03.163 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.163 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.163 TEST_HEADER include/spdk/blobfs.h 00:04:03.163 TEST_HEADER include/spdk/blob.h 00:04:03.163 TEST_HEADER include/spdk/conf.h 00:04:03.163 TEST_HEADER include/spdk/config.h 00:04:03.163 TEST_HEADER include/spdk/cpuset.h 00:04:03.163 TEST_HEADER include/spdk/crc16.h 00:04:03.163 TEST_HEADER include/spdk/crc32.h 00:04:03.163 LINK spdk_nvme_discover 00:04:03.163 TEST_HEADER include/spdk/crc64.h 00:04:03.163 TEST_HEADER include/spdk/dif.h 00:04:03.163 TEST_HEADER 
include/spdk/dma.h 00:04:03.163 TEST_HEADER include/spdk/endian.h 00:04:03.163 CC test/app/bdev_svc/bdev_svc.o 00:04:03.163 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.163 TEST_HEADER include/spdk/env.h 00:04:03.163 TEST_HEADER include/spdk/event.h 00:04:03.163 TEST_HEADER include/spdk/fd_group.h 00:04:03.163 TEST_HEADER include/spdk/fd.h 00:04:03.163 TEST_HEADER include/spdk/file.h 00:04:03.163 TEST_HEADER include/spdk/fsdev.h 00:04:03.163 TEST_HEADER include/spdk/fsdev_module.h 00:04:03.163 TEST_HEADER include/spdk/ftl.h 00:04:03.163 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.163 TEST_HEADER include/spdk/hexlify.h 00:04:03.163 TEST_HEADER include/spdk/histogram_data.h 00:04:03.163 LINK interrupt_tgt 00:04:03.163 TEST_HEADER include/spdk/idxd.h 00:04:03.163 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.163 TEST_HEADER include/spdk/init.h 00:04:03.163 TEST_HEADER include/spdk/ioat.h 00:04:03.163 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.163 TEST_HEADER include/spdk/iscsi_spec.h 00:04:03.163 TEST_HEADER include/spdk/json.h 00:04:03.163 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.423 TEST_HEADER include/spdk/keyring.h 00:04:03.423 TEST_HEADER include/spdk/keyring_module.h 00:04:03.423 TEST_HEADER include/spdk/likely.h 00:04:03.423 TEST_HEADER include/spdk/log.h 00:04:03.423 TEST_HEADER include/spdk/lvol.h 00:04:03.423 TEST_HEADER include/spdk/md5.h 00:04:03.423 TEST_HEADER include/spdk/memory.h 00:04:03.423 TEST_HEADER include/spdk/mmio.h 00:04:03.423 TEST_HEADER include/spdk/nbd.h 00:04:03.423 TEST_HEADER include/spdk/net.h 00:04:03.423 TEST_HEADER include/spdk/notify.h 00:04:03.423 TEST_HEADER include/spdk/nvme.h 00:04:03.423 LINK ioat_perf 00:04:03.423 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.423 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.423 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvme_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvme_zns.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:03.423 LINK spdk_nvme_perf 00:04:03.423 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvmf.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.423 TEST_HEADER include/spdk/opal.h 00:04:03.423 TEST_HEADER include/spdk/opal_spec.h 00:04:03.423 TEST_HEADER include/spdk/pci_ids.h 00:04:03.423 TEST_HEADER include/spdk/pipe.h 00:04:03.423 TEST_HEADER include/spdk/queue.h 00:04:03.423 TEST_HEADER include/spdk/reduce.h 00:04:03.423 TEST_HEADER include/spdk/rpc.h 00:04:03.423 TEST_HEADER include/spdk/scheduler.h 00:04:03.423 TEST_HEADER include/spdk/scsi.h 00:04:03.423 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.423 TEST_HEADER include/spdk/sock.h 00:04:03.423 TEST_HEADER include/spdk/stdinc.h 00:04:03.423 TEST_HEADER include/spdk/string.h 00:04:03.423 TEST_HEADER include/spdk/thread.h 00:04:03.423 TEST_HEADER include/spdk/trace.h 00:04:03.423 TEST_HEADER include/spdk/trace_parser.h 00:04:03.423 TEST_HEADER include/spdk/tree.h 00:04:03.423 TEST_HEADER include/spdk/ublk.h 00:04:03.423 TEST_HEADER include/spdk/util.h 00:04:03.423 TEST_HEADER include/spdk/uuid.h 00:04:03.423 TEST_HEADER include/spdk/version.h 00:04:03.423 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.423 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.423 TEST_HEADER include/spdk/vhost.h 00:04:03.423 LINK hello_sock 00:04:03.423 TEST_HEADER include/spdk/vmd.h 00:04:03.423 TEST_HEADER include/spdk/xor.h 00:04:03.423 TEST_HEADER include/spdk/zipf.h 00:04:03.423 LINK thread 
00:04:03.423 LINK bdev_svc 00:04:03.423 CXX test/cpp_headers/accel.o 00:04:03.423 CC test/app/histogram_perf/histogram_perf.o 00:04:03.682 CC test/app/jsoncat/jsoncat.o 00:04:03.682 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.682 CXX test/cpp_headers/accel_module.o 00:04:03.682 CC examples/ioat/verify/verify.o 00:04:03.682 CXX test/cpp_headers/assert.o 00:04:03.682 CC test/app/stub/stub.o 00:04:03.682 LINK histogram_perf 00:04:03.682 LINK spdk_nvme_identify 00:04:03.682 LINK jsoncat 00:04:03.942 CXX test/cpp_headers/barrier.o 00:04:03.942 LINK verify 00:04:03.942 LINK stub 00:04:03.942 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.942 CC test/env/vtophys/vtophys.o 00:04:03.942 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.942 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.942 CC app/spdk_top/spdk_top.o 00:04:03.942 CC test/env/memory/memory_ut.o 00:04:03.942 CXX test/cpp_headers/base64.o 00:04:03.942 LINK lsvmd 00:04:03.942 LINK nvme_fuzz 00:04:03.942 LINK vtophys 00:04:03.942 CC test/env/pci/pci_ut.o 00:04:04.200 LINK env_dpdk_post_init 00:04:04.200 CXX test/cpp_headers/bdev.o 00:04:04.200 CC test/event/event_perf/event_perf.o 00:04:04.200 CC examples/vmd/led/led.o 00:04:04.200 CC test/event/reactor/reactor.o 00:04:04.200 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.200 CC test/event/reactor_perf/reactor_perf.o 00:04:04.460 LINK event_perf 00:04:04.460 CXX test/cpp_headers/bdev_module.o 00:04:04.460 LINK reactor 00:04:04.460 LINK led 00:04:04.460 LINK pci_ut 00:04:04.460 LINK reactor_perf 00:04:04.460 LINK mem_callbacks 00:04:04.718 CXX test/cpp_headers/bdev_zone.o 00:04:04.718 CC test/event/app_repeat/app_repeat.o 00:04:04.718 CC test/event/scheduler/scheduler.o 00:04:04.718 CC test/rpc_client/rpc_client_test.o 00:04:04.718 CC examples/idxd/perf/perf.o 00:04:04.718 CXX test/cpp_headers/bit_array.o 00:04:04.718 LINK app_repeat 00:04:04.718 LINK spdk_top 00:04:04.718 CC test/nvme/aer/aer.o 00:04:04.977 CC test/accel/dif/dif.o 00:04:04.977 LINK scheduler 00:04:04.977 LINK rpc_client_test 00:04:04.977 CXX test/cpp_headers/bit_pool.o 00:04:04.977 CC app/vhost/vhost.o 00:04:05.237 LINK aer 00:04:05.237 CXX test/cpp_headers/blob_bdev.o 00:04:05.237 LINK idxd_perf 00:04:05.237 LINK memory_ut 00:04:05.237 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:05.237 CC examples/accel/perf/accel_perf.o 00:04:05.237 LINK vhost 00:04:05.237 CXX test/cpp_headers/blobfs_bdev.o 00:04:05.496 CC examples/blob/hello_world/hello_blob.o 00:04:05.496 CC test/nvme/reset/reset.o 00:04:05.496 CC test/nvme/sgl/sgl.o 00:04:05.496 CC examples/nvme/hello_world/hello_world.o 00:04:05.496 LINK hello_fsdev 00:04:05.496 CXX test/cpp_headers/blobfs.o 00:04:05.496 LINK dif 00:04:05.754 LINK hello_blob 00:04:05.754 CC app/spdk_dd/spdk_dd.o 00:04:05.754 CXX test/cpp_headers/blob.o 00:04:05.754 LINK reset 00:04:05.754 LINK hello_world 00:04:05.754 CXX test/cpp_headers/conf.o 00:04:05.754 LINK sgl 00:04:05.754 LINK accel_perf 00:04:05.754 CXX test/cpp_headers/config.o 00:04:05.754 CXX test/cpp_headers/cpuset.o 00:04:06.011 CC examples/blob/cli/blobcli.o 00:04:06.011 LINK iscsi_fuzz 00:04:06.011 CC examples/nvme/reconnect/reconnect.o 00:04:06.011 CC test/nvme/e2edp/nvme_dp.o 00:04:06.011 CXX test/cpp_headers/crc16.o 00:04:06.011 CC test/nvme/overhead/overhead.o 00:04:06.011 CC test/nvme/err_injection/err_injection.o 00:04:06.011 CC test/nvme/startup/startup.o 00:04:06.011 CC app/fio/nvme/fio_plugin.o 00:04:06.269 LINK spdk_dd 00:04:06.269 CXX test/cpp_headers/crc32.o 00:04:06.269 LINK startup 00:04:06.269 LINK 
err_injection 00:04:06.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.269 LINK nvme_dp 00:04:06.269 LINK overhead 00:04:06.269 LINK reconnect 00:04:06.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:06.528 CXX test/cpp_headers/crc64.o 00:04:06.528 LINK blobcli 00:04:06.528 CXX test/cpp_headers/dif.o 00:04:06.528 CXX test/cpp_headers/dma.o 00:04:06.528 CXX test/cpp_headers/endian.o 00:04:06.528 CC test/nvme/reserve/reserve.o 00:04:06.528 CC app/fio/bdev/fio_plugin.o 00:04:06.528 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.528 LINK spdk_nvme 00:04:06.786 CXX test/cpp_headers/env_dpdk.o 00:04:06.786 CC examples/nvme/arbitration/arbitration.o 00:04:06.786 CXX test/cpp_headers/env.o 00:04:06.786 CC examples/nvme/hotplug/hotplug.o 00:04:06.786 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:06.786 LINK reserve 00:04:06.786 LINK vhost_fuzz 00:04:06.786 CC test/blobfs/mkfs/mkfs.o 00:04:07.051 CXX test/cpp_headers/event.o 00:04:07.051 LINK cmb_copy 00:04:07.051 CC examples/nvme/abort/abort.o 00:04:07.051 LINK hotplug 00:04:07.051 CC test/nvme/simple_copy/simple_copy.o 00:04:07.051 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:07.051 LINK arbitration 00:04:07.051 CXX test/cpp_headers/fd_group.o 00:04:07.051 LINK mkfs 00:04:07.051 LINK spdk_bdev 00:04:07.051 LINK nvme_manage 00:04:07.332 CC test/nvme/connect_stress/connect_stress.o 00:04:07.332 LINK pmr_persistence 00:04:07.332 LINK simple_copy 00:04:07.332 CXX test/cpp_headers/fd.o 00:04:07.332 CXX test/cpp_headers/file.o 00:04:07.332 CXX test/cpp_headers/fsdev.o 00:04:07.332 LINK abort 00:04:07.332 CC test/nvme/boot_partition/boot_partition.o 00:04:07.332 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.332 LINK connect_stress 00:04:07.591 CXX test/cpp_headers/fsdev_module.o 00:04:07.591 CXX test/cpp_headers/ftl.o 00:04:07.591 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.591 CC test/lvol/esnap/esnap.o 00:04:07.591 LINK boot_partition 00:04:07.591 CC test/nvme/fused_ordering/fused_ordering.o 00:04:07.591 CC test/nvme/compliance/nvme_compliance.o 00:04:07.591 LINK hello_bdev 00:04:07.591 CC test/bdev/bdevio/bdevio.o 00:04:07.591 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:07.591 CXX test/cpp_headers/gpt_spec.o 00:04:07.849 LINK fused_ordering 00:04:07.849 CC test/nvme/cuse/cuse.o 00:04:07.849 CC test/nvme/fdp/fdp.o 00:04:07.849 CXX test/cpp_headers/hexlify.o 00:04:07.849 CXX test/cpp_headers/histogram_data.o 00:04:07.849 LINK doorbell_aers 00:04:07.849 LINK nvme_compliance 00:04:07.849 CXX test/cpp_headers/idxd.o 00:04:08.106 CXX test/cpp_headers/idxd_spec.o 00:04:08.106 CXX test/cpp_headers/init.o 00:04:08.106 CXX test/cpp_headers/ioat.o 00:04:08.107 LINK bdevio 00:04:08.107 CXX test/cpp_headers/ioat_spec.o 00:04:08.107 CXX test/cpp_headers/iscsi_spec.o 00:04:08.107 LINK fdp 00:04:08.107 CXX test/cpp_headers/json.o 00:04:08.107 CXX test/cpp_headers/jsonrpc.o 00:04:08.107 CXX test/cpp_headers/keyring.o 00:04:08.107 CXX test/cpp_headers/keyring_module.o 00:04:08.365 CXX test/cpp_headers/likely.o 00:04:08.365 CXX test/cpp_headers/log.o 00:04:08.365 CXX test/cpp_headers/lvol.o 00:04:08.365 LINK bdevperf 00:04:08.365 CXX test/cpp_headers/md5.o 00:04:08.365 CXX test/cpp_headers/memory.o 00:04:08.365 CXX test/cpp_headers/mmio.o 00:04:08.365 CXX test/cpp_headers/nbd.o 00:04:08.365 CXX test/cpp_headers/net.o 00:04:08.365 CXX test/cpp_headers/notify.o 00:04:08.365 CXX test/cpp_headers/nvme.o 00:04:08.622 CXX test/cpp_headers/nvme_intel.o 00:04:08.622 CXX test/cpp_headers/nvme_ocssd.o 00:04:08.622 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:04:08.622 CXX test/cpp_headers/nvme_spec.o 00:04:08.622 CXX test/cpp_headers/nvme_zns.o 00:04:08.622 CXX test/cpp_headers/nvmf_cmd.o 00:04:08.622 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:08.622 CXX test/cpp_headers/nvmf.o 00:04:08.622 CXX test/cpp_headers/nvmf_spec.o 00:04:08.880 CC examples/nvmf/nvmf/nvmf.o 00:04:08.880 CXX test/cpp_headers/nvmf_transport.o 00:04:08.880 CXX test/cpp_headers/opal.o 00:04:08.880 CXX test/cpp_headers/opal_spec.o 00:04:08.880 CXX test/cpp_headers/pci_ids.o 00:04:08.880 CXX test/cpp_headers/pipe.o 00:04:08.880 CXX test/cpp_headers/queue.o 00:04:08.880 CXX test/cpp_headers/reduce.o 00:04:08.880 CXX test/cpp_headers/rpc.o 00:04:08.880 CXX test/cpp_headers/scheduler.o 00:04:08.880 CXX test/cpp_headers/scsi.o 00:04:08.880 CXX test/cpp_headers/scsi_spec.o 00:04:08.880 CXX test/cpp_headers/sock.o 00:04:09.139 CXX test/cpp_headers/stdinc.o 00:04:09.139 LINK nvmf 00:04:09.139 CXX test/cpp_headers/string.o 00:04:09.139 CXX test/cpp_headers/thread.o 00:04:09.139 LINK cuse 00:04:09.139 CXX test/cpp_headers/trace.o 00:04:09.139 CXX test/cpp_headers/trace_parser.o 00:04:09.139 CXX test/cpp_headers/tree.o 00:04:09.139 CXX test/cpp_headers/ublk.o 00:04:09.139 CXX test/cpp_headers/util.o 00:04:09.139 CXX test/cpp_headers/uuid.o 00:04:09.139 CXX test/cpp_headers/version.o 00:04:09.139 CXX test/cpp_headers/vfio_user_pci.o 00:04:09.398 CXX test/cpp_headers/vfio_user_spec.o 00:04:09.398 CXX test/cpp_headers/vhost.o 00:04:09.398 CXX test/cpp_headers/vmd.o 00:04:09.398 CXX test/cpp_headers/xor.o 00:04:09.398 CXX test/cpp_headers/zipf.o 00:04:12.687 LINK esnap 00:04:12.687 00:04:12.687 real 1m26.051s 00:04:12.687 user 8m4.592s 00:04:12.687 sys 1m32.330s 00:04:12.687 08:38:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:12.687 ************************************ 00:04:12.687 END TEST make 00:04:12.687 ************************************ 00:04:12.687 08:38:20 make -- common/autotest_common.sh@10 -- $ set +x 00:04:12.687 08:38:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:12.687 08:38:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:12.687 08:38:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:12.687 08:38:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.687 08:38:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:12.687 08:38:20 -- pm/common@44 -- $ pid=5301 00:04:12.687 08:38:20 -- pm/common@50 -- $ kill -TERM 5301 00:04:12.687 08:38:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.687 08:38:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:12.687 08:38:20 -- pm/common@44 -- $ pid=5303 00:04:12.687 08:38:20 -- pm/common@50 -- $ kill -TERM 5303 00:04:12.687 08:38:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:12.687 08:38:20 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.687 08:38:20 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.687 08:38:20 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.687 08:38:20 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.687 08:38:20 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.687 08:38:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.687 08:38:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 
00:04:12.687 08:38:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.687 08:38:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.687 08:38:20 -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.687 08:38:20 -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.687 08:38:20 -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.687 08:38:20 -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.687 08:38:20 -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.687 08:38:20 -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.687 08:38:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.687 08:38:20 -- scripts/common.sh@344 -- # case "$op" in 00:04:12.687 08:38:20 -- scripts/common.sh@345 -- # : 1 00:04:12.687 08:38:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.687 08:38:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.687 08:38:20 -- scripts/common.sh@365 -- # decimal 1 00:04:12.687 08:38:20 -- scripts/common.sh@353 -- # local d=1 00:04:12.687 08:38:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.687 08:38:20 -- scripts/common.sh@355 -- # echo 1 00:04:12.687 08:38:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.687 08:38:20 -- scripts/common.sh@366 -- # decimal 2 00:04:12.687 08:38:20 -- scripts/common.sh@353 -- # local d=2 00:04:12.687 08:38:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.687 08:38:20 -- scripts/common.sh@355 -- # echo 2 00:04:12.687 08:38:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.687 08:38:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.687 08:38:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.687 08:38:20 -- scripts/common.sh@368 -- # return 0 00:04:12.687 08:38:20 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.687 08:38:20 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 08:38:20 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 08:38:20 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 08:38:20 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 08:38:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.687 08:38:20 -- nvmf/common.sh@7 -- # uname -s 
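The scripts/common.sh trace just above ("lt 1.15 2" driven by cmp_versions) is the gate that decides whether the installed lcov predates 2.x and therefore still takes the old --rc lcov_* option names; with lcov 1.15 on this runner the comparison returns true and the legacy options are exported. A condensed sketch of that dotted-version comparison, written as a standalone helper rather than the exact scripts/common.sh implementation (leading-zero handling from the real decimal helper is omitted):

    #!/usr/bin/env bash
    # Returns success (0) when version $1 sorts strictly below version $2,
    # comparing dot/dash separated numeric fields left to right.
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # versions are equal
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        # Pre-2.0 lcov: keep the lcov_-prefixed rc switches seen in this log.
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi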
00:04:12.687 08:38:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.687 08:38:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.687 08:38:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.687 08:38:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.687 08:38:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.687 08:38:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.687 08:38:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.687 08:38:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.687 08:38:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.687 08:38:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.687 08:38:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:04:12.687 08:38:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:04:12.687 08:38:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.687 08:38:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.687 08:38:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:12.687 08:38:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.687 08:38:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.687 08:38:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.687 08:38:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.687 08:38:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.687 08:38:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.687 08:38:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.687 08:38:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.687 08:38:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.687 08:38:20 -- paths/export.sh@5 -- # export PATH 00:04:12.687 08:38:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.687 08:38:20 -- nvmf/common.sh@51 -- # : 0 00:04:12.687 08:38:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.687 08:38:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.687 08:38:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.687 08:38:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.687 08:38:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.687 08:38:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.687 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.687 08:38:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.687 08:38:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.687 08:38:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.687 08:38:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:12.687 08:38:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:12.687 08:38:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:12.687 08:38:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:12.687 08:38:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.687 08:38:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:12.687 08:38:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.687 08:38:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:12.687 08:38:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:12.687 08:38:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:12.687 08:38:20 -- spdk/autotest.sh@48 -- # udevadm_pid=55571 00:04:12.687 08:38:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:12.687 08:38:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:12.687 08:38:20 -- pm/common@17 -- # local monitor 00:04:12.687 08:38:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.687 08:38:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.687 08:38:20 -- pm/common@25 -- # sleep 1 00:04:12.687 08:38:20 -- pm/common@21 -- # date +%s 00:04:12.687 08:38:20 -- pm/common@21 -- # date +%s 00:04:12.688 08:38:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733906300 00:04:12.688 08:38:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733906300 00:04:12.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733906300_collect-vmstat.pm.log 00:04:12.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733906300_collect-cpu-load.pm.log 00:04:13.883 08:38:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.883 08:38:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:13.883 08:38:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.883 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.883 08:38:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:13.883 08:38:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:13.883 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.883 08:38:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.883 08:38:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.883 08:38:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.883 08:38:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.883 08:38:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.883 08:38:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:13.883 08:38:21 -- common/autotest_common.sh@1457 -- # uname 00:04:13.883 08:38:21 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:13.883 08:38:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:13.883 08:38:21 -- common/autotest_common.sh@1477 -- # uname 00:04:13.883 08:38:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:13.883 08:38:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:13.883 08:38:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:13.883 lcov: LCOV version 1.15 00:04:13.883 08:38:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:28.766 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.766 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:43.648 08:38:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:43.648 08:38:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.648 08:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.648 08:38:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:43.648 08:38:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.648 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:43.648 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:43.648 08:38:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:43.648 08:38:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:43.648 08:38:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:43.648 08:38:50 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:43.648 08:38:50 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:43.648 08:38:50 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:43.648 08:38:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:43.648 08:38:50 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:43.648 08:38:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:43.648 08:38:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:43.648 08:38:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:43.648 08:38:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.648 08:38:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.648 08:38:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:43.648 08:38:50 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:43.648 08:38:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:43.648 08:38:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:43.648 08:38:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:43.648 08:38:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.648 08:38:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
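The coverage prologue above takes an lcov baseline capture (-i against the freshly built tree, tagged "Baseline") before any test has run, so that files whose code never executes still show up with zero counts once the baseline is merged with a post-test capture. A sketch of that flow; the post-test capture and merge are the standard lcov workflow and are assumed here, they are not shown in this portion of the log:

    #!/usr/bin/env bash
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

    # Zero-coverage baseline taken right after the build, as in the trace above.
    lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"

    # ... the autotest suites run here ...

    # Assumed follow-up: capture what the tests exercised and merge with the baseline.
    lcov $LCOV_OPTS -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"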
00:04:43.648 08:38:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:43.648 08:38:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:43.648 08:38:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:43.649 08:38:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:43.649 08:38:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.649 08:38:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:43.649 08:38:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:43.649 08:38:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:43.649 08:38:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:43.649 08:38:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.649 08:38:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:43.649 08:38:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.649 08:38:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:43.649 08:38:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:43.649 08:38:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:43.649 08:38:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.649 No valid GPT data, bailing 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # pt= 00:04:43.649 08:38:50 -- scripts/common.sh@395 -- # return 1 00:04:43.649 08:38:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:43.649 1+0 records in 00:04:43.649 1+0 records out 00:04:43.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439816 s, 238 MB/s 00:04:43.649 08:38:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.649 08:38:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:43.649 08:38:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:43.649 08:38:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:43.649 08:38:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:43.649 No valid GPT data, bailing 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # pt= 00:04:43.649 08:38:50 -- scripts/common.sh@395 -- # return 1 00:04:43.649 08:38:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:43.649 1+0 records in 00:04:43.649 1+0 records out 00:04:43.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0032427 s, 323 MB/s 00:04:43.649 08:38:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.649 08:38:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:43.649 08:38:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:43.649 08:38:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:43.649 08:38:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:43.649 No valid GPT data, bailing 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # pt= 00:04:43.649 08:38:50 -- scripts/common.sh@395 -- # return 1 00:04:43.649 08:38:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:43.649 1+0 records in 00:04:43.649 1+0 records out 00:04:43.649 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00443068 s, 237 MB/s 00:04:43.649 08:38:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.649 08:38:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:43.649 08:38:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:43.649 08:38:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:43.649 08:38:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:43.649 No valid GPT data, bailing 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:43.649 08:38:50 -- scripts/common.sh@394 -- # pt= 00:04:43.649 08:38:50 -- scripts/common.sh@395 -- # return 1 00:04:43.649 08:38:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:43.649 1+0 records in 00:04:43.649 1+0 records out 00:04:43.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00568241 s, 185 MB/s 00:04:43.649 08:38:50 -- spdk/autotest.sh@105 -- # sync 00:04:43.649 08:38:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:43.649 08:38:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:43.649 08:38:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:45.027 08:38:52 -- spdk/autotest.sh@111 -- # uname -s 00:04:45.027 08:38:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:45.027 08:38:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:45.027 08:38:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:45.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.596 Hugepages 00:04:45.596 node hugesize free / total 00:04:45.596 node0 1048576kB 0 / 0 00:04:45.596 node0 2048kB 0 / 0 00:04:45.596 00:04:45.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.596 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.596 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:45.596 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:45.596 08:38:53 -- spdk/autotest.sh@117 -- # uname -s 00:04:45.596 08:38:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:45.596 08:38:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:45.596 08:38:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.533 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.533 08:38:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:47.475 08:38:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:47.475 08:38:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:47.475 08:38:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.476 08:38:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:47.476 08:38:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:47.476 08:38:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:47.476 08:38:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.476 08:38:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:47.476 08:38:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:47.739 08:38:55 
-- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:47.740 08:38:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:47.740 08:38:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.998 Waiting for block devices as requested 00:04:47.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.258 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.258 08:38:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.258 08:38:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:48.258 08:38:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.258 08:38:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.258 08:38:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1543 -- # continue 00:04:48.258 08:38:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.258 08:38:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.258 08:38:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 
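Earlier in the pre-cleanup, the loop over /dev/nvme*n* probed each namespace for an existing partition signature (spdk-gpt.py, then blkid -s PTTYPE) and scrubbed the first MiB with dd only when nothing was found ("No valid GPT data, bailing" followed by an empty PTTYPE). A condensed sketch of that probe-then-wipe decision; treating a non-zero exit from spdk-gpt.py as "no SPDK GPT present" is inferred from the trace rather than taken from the script source:

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                   # whole namespaces only, no partitions
        if ! "$spdk/scripts/spdk-gpt.py" "$dev" &&      # prints "No valid GPT data, bailing"
           [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            # No partition table of any kind: scrub the first MiB so stale
            # metadata cannot leak into the functional tests.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done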
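The id-ctrl pipeline traced just above for /dev/nvme1 (and repeated for /dev/nvme0 below) is the gate for the namespace-revert step: OACS is pulled out of nvme id-ctrl, bit 3 (mask 0x8, Namespace Management/Attachment support) is tested, and a controller whose unallocated capacity (unvmcap) is already zero is skipped. A minimal sketch of that check, assuming the plain-text nvme-cli output format shown in the trace:

    #!/usr/bin/env bash
    ctrlr=/dev/nvme1   # resolved from the PCI address through sysfs, as in the trace
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)          # e.g. " 0x12a"
    if (( (oacs & 0x8) != 0 )); then                                 # Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        if (( unvmcap == 0 )); then
            echo "$ctrlr has no unallocated capacity, skipping revert"
        else
            echo "$ctrlr would get a namespace revert here"
        fi
    fi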
00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.258 08:38:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:48.258 08:38:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.258 08:38:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.258 08:38:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.258 08:38:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.258 08:38:55 -- common/autotest_common.sh@1543 -- # continue 00:04:48.258 08:38:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:48.258 08:38:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.258 08:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.258 08:38:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:48.258 08:38:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.258 08:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.258 08:38:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.223 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.223 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.223 08:38:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:49.223 08:38:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.223 08:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:49.223 08:38:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:49.223 08:38:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:49.223 08:38:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.223 08:38:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:49.223 08:38:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:49.223 08:38:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:49.223 08:38:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:49.223 08:38:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:49.223 08:38:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.223 08:38:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.223 08:38:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.223 08:38:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.223 08:38:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.223 08:38:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:49.223 08:38:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:49.223 08:38:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:49.223 08:38:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:49.223 08:38:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.223 08:38:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.223 08:38:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:49.483 08:38:56 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:49.483 08:38:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.483 08:38:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.483 08:38:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:49.483 08:38:57 -- common/autotest_common.sh@1572 -- # return 0 00:04:49.483 08:38:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:49.483 08:38:57 -- common/autotest_common.sh@1580 -- # return 0 00:04:49.483 08:38:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:49.483 08:38:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:49.483 08:38:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.483 08:38:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.483 08:38:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:49.483 08:38:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.483 08:38:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.483 08:38:57 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:49.483 08:38:57 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:49.483 08:38:57 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:49.483 08:38:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.483 08:38:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.483 08:38:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.483 08:38:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.483 ************************************ 00:04:49.483 START TEST env 00:04:49.483 ************************************ 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.483 * Looking for test storage... 00:04:49.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.483 08:38:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.483 08:38:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.483 08:38:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.483 08:38:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.483 08:38:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.483 08:38:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.483 08:38:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.483 08:38:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.483 08:38:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.483 08:38:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.483 08:38:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.483 08:38:57 env -- scripts/common.sh@344 -- # case "$op" in 00:04:49.483 08:38:57 env -- scripts/common.sh@345 -- # : 1 00:04:49.483 08:38:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.483 08:38:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.483 08:38:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:49.483 08:38:57 env -- scripts/common.sh@353 -- # local d=1 00:04:49.483 08:38:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.483 08:38:57 env -- scripts/common.sh@355 -- # echo 1 00:04:49.483 08:38:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.483 08:38:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:49.483 08:38:57 env -- scripts/common.sh@353 -- # local d=2 00:04:49.483 08:38:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.483 08:38:57 env -- scripts/common.sh@355 -- # echo 2 00:04:49.483 08:38:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.483 08:38:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.483 08:38:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.483 08:38:57 env -- scripts/common.sh@368 -- # return 0 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.483 --rc genhtml_branch_coverage=1 00:04:49.483 --rc genhtml_function_coverage=1 00:04:49.483 --rc genhtml_legend=1 00:04:49.483 --rc geninfo_all_blocks=1 00:04:49.483 --rc geninfo_unexecuted_blocks=1 00:04:49.483 00:04:49.483 ' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.483 --rc genhtml_branch_coverage=1 00:04:49.483 --rc genhtml_function_coverage=1 00:04:49.483 --rc genhtml_legend=1 00:04:49.483 --rc geninfo_all_blocks=1 00:04:49.483 --rc geninfo_unexecuted_blocks=1 00:04:49.483 00:04:49.483 ' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.483 --rc genhtml_branch_coverage=1 00:04:49.483 --rc genhtml_function_coverage=1 00:04:49.483 --rc genhtml_legend=1 00:04:49.483 --rc geninfo_all_blocks=1 00:04:49.483 --rc geninfo_unexecuted_blocks=1 00:04:49.483 00:04:49.483 ' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.483 --rc genhtml_branch_coverage=1 00:04:49.483 --rc genhtml_function_coverage=1 00:04:49.483 --rc genhtml_legend=1 00:04:49.483 --rc geninfo_all_blocks=1 00:04:49.483 --rc geninfo_unexecuted_blocks=1 00:04:49.483 00:04:49.483 ' 00:04:49.483 08:38:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.483 08:38:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.483 08:38:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.483 ************************************ 00:04:49.483 START TEST env_memory 00:04:49.483 ************************************ 00:04:49.483 08:38:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.483 00:04:49.483 00:04:49.483 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.483 http://cunit.sourceforge.net/ 00:04:49.483 00:04:49.483 00:04:49.483 Suite: memory 00:04:49.742 Test: alloc and free memory map ...[2024-12-11 08:38:57.266422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.742 passed 00:04:49.742 Test: mem map translation ...[2024-12-11 08:38:57.297009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.742 [2024-12-11 08:38:57.297049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.742 [2024-12-11 08:38:57.297104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.742 [2024-12-11 08:38:57.297114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.742 passed 00:04:49.742 Test: mem map registration ...[2024-12-11 08:38:57.361286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.742 [2024-12-11 08:38:57.361329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:49.742 passed 00:04:49.742 Test: mem map adjacent registrations ...passed 00:04:49.742 00:04:49.742 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.742 suites 1 1 n/a 0 0 00:04:49.742 tests 4 4 4 0 0 00:04:49.742 asserts 152 152 152 0 n/a 00:04:49.742 00:04:49.742 Elapsed time = 0.212 seconds 00:04:49.742 00:04:49.742 real 0m0.230s 00:04:49.742 user 0m0.213s 00:04:49.742 sys 0m0.013s 00:04:49.742 08:38:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.742 08:38:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 ************************************ 00:04:49.742 END TEST env_memory 00:04:49.742 ************************************ 00:04:49.742 08:38:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.742 08:38:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.742 08:38:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.742 08:38:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 ************************************ 00:04:49.742 START TEST env_vtophys 00:04:49.742 ************************************ 00:04:49.742 08:38:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.002 EAL: lib.eal log level changed from notice to debug 00:04:50.002 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 1 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 2 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 3 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 4 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 5 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 6 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 7 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 8 as core 0 on socket 0 00:04:50.002 EAL: Detected lcore 9 as core 0 on socket 0 00:04:50.002 EAL: Maximum logical cores by configuration: 128 00:04:50.002 EAL: Detected CPU lcores: 10 00:04:50.002 EAL: Detected NUMA nodes: 1 00:04:50.002 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:50.002 EAL: Detected shared linkage of DPDK 00:04:50.002 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:50.002 EAL: Selected IOVA mode 'PA' 00:04:50.002 EAL: Probing VFIO support... 00:04:50.002 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.002 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:50.002 EAL: Ask a virtual area of 0x2e000 bytes 00:04:50.002 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:50.002 EAL: Setting up physically contiguous memory... 00:04:50.002 EAL: Setting maximum number of open files to 524288 00:04:50.002 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:50.002 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:50.002 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.002 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:50.002 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.002 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.002 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:50.002 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:50.002 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.002 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:50.002 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.002 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.002 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:50.002 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:50.002 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.002 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:50.002 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.002 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.002 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:50.002 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:50.002 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.002 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:50.002 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.002 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.002 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:50.002 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:50.002 EAL: Hugepages will be freed exactly as allocated. 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: TSC frequency is ~2200000 KHz 00:04:50.002 EAL: Main lcore 0 is ready (tid=7f4bcee44a00;cpuset=[0]) 00:04:50.002 EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 0 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 2MB 00:04:50.002 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.002 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:50.002 EAL: Mem event callback 'spdk:(nil)' registered 00:04:50.002 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:50.002 00:04:50.002 00:04:50.002 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.002 http://cunit.sourceforge.net/ 00:04:50.002 00:04:50.002 00:04:50.002 Suite: components_suite 00:04:50.002 Test: vtophys_malloc_test ...passed 00:04:50.002 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 4MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was shrunk by 4MB 00:04:50.002 EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 6MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was shrunk by 6MB 00:04:50.002 EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 10MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was shrunk by 10MB 00:04:50.002 EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 18MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was shrunk by 18MB 00:04:50.002 EAL: Trying to obtain current memory policy. 00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.002 EAL: Trying to obtain current memory policy. 
00:04:50.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.002 EAL: Restoring previous memory policy: 4 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.002 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.002 EAL: request: mp_malloc_sync 00:04:50.002 EAL: No shared files mode enabled, IPC is disabled 00:04:50.003 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.003 EAL: Trying to obtain current memory policy. 00:04:50.003 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.003 EAL: Restoring previous memory policy: 4 00:04:50.003 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.003 EAL: request: mp_malloc_sync 00:04:50.003 EAL: No shared files mode enabled, IPC is disabled 00:04:50.003 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.003 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.262 EAL: request: mp_malloc_sync 00:04:50.262 EAL: No shared files mode enabled, IPC is disabled 00:04:50.262 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.262 EAL: Trying to obtain current memory policy. 00:04:50.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.262 EAL: Restoring previous memory policy: 4 00:04:50.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.262 EAL: request: mp_malloc_sync 00:04:50.262 EAL: No shared files mode enabled, IPC is disabled 00:04:50.262 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.262 EAL: request: mp_malloc_sync 00:04:50.262 EAL: No shared files mode enabled, IPC is disabled 00:04:50.262 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.262 EAL: Trying to obtain current memory policy. 00:04:50.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.262 EAL: Restoring previous memory policy: 4 00:04:50.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.262 EAL: request: mp_malloc_sync 00:04:50.262 EAL: No shared files mode enabled, IPC is disabled 00:04:50.262 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.262 EAL: request: mp_malloc_sync 00:04:50.262 EAL: No shared files mode enabled, IPC is disabled 00:04:50.262 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.262 EAL: Trying to obtain current memory policy. 
00:04:50.262 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.521 EAL: Restoring previous memory policy: 4 00:04:50.521 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.521 EAL: request: mp_malloc_sync 00:04:50.521 EAL: No shared files mode enabled, IPC is disabled 00:04:50.521 EAL: Heap on socket 0 was expanded by 1026MB 00:04:50.521 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.781 passed 00:04:50.781 00:04:50.781 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.781 suites 1 1 n/a 0 0 00:04:50.781 tests 2 2 2 0 0 00:04:50.781 asserts 5442 5442 5442 0 n/a 00:04:50.781 00:04:50.781 Elapsed time = 0.644 seconds 00:04:50.781 EAL: request: mp_malloc_sync 00:04:50.781 EAL: No shared files mode enabled, IPC is disabled 00:04:50.781 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:50.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.781 EAL: request: mp_malloc_sync 00:04:50.781 EAL: No shared files mode enabled, IPC is disabled 00:04:50.781 EAL: Heap on socket 0 was shrunk by 2MB 00:04:50.781 EAL: No shared files mode enabled, IPC is disabled 00:04:50.781 EAL: No shared files mode enabled, IPC is disabled 00:04:50.781 EAL: No shared files mode enabled, IPC is disabled 00:04:50.781 00:04:50.781 real 0m0.848s 00:04:50.781 user 0m0.429s 00:04:50.781 sys 0m0.293s 00:04:50.781 08:38:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.781 08:38:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:50.781 ************************************ 00:04:50.781 END TEST env_vtophys 00:04:50.781 ************************************ 00:04:50.781 08:38:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.781 08:38:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.781 08:38:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.781 08:38:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.781 ************************************ 00:04:50.781 START TEST env_pci 00:04:50.781 ************************************ 00:04:50.781 08:38:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.781 00:04:50.781 00:04:50.781 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.781 http://cunit.sourceforge.net/ 00:04:50.781 00:04:50.781 00:04:50.781 Suite: pci 00:04:50.781 Test: pci_hook ...[2024-12-11 08:38:58.413890] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57767 has claimed it 00:04:50.781 passed 00:04:50.781 00:04:50.781 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.781 suites 1 1 n/a 0 0 00:04:50.781 tests 1 1 1 0 0 00:04:50.781 asserts 25 25 25 0 n/a 00:04:50.781 00:04:50.781 Elapsed time = 0.002 seconds 00:04:50.781 EAL: Cannot find device (10000:00:01.0) 00:04:50.781 EAL: Failed to attach device on primary process 00:04:50.781 00:04:50.781 real 0m0.020s 00:04:50.781 user 0m0.008s 00:04:50.781 sys 0m0.011s 00:04:50.781 08:38:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.781 08:38:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:50.781 ************************************ 00:04:50.781 END TEST env_pci 00:04:50.781 ************************************ 00:04:50.781 08:38:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:50.781 08:38:58 env -- env/env.sh@15 -- # uname 00:04:50.781 08:38:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:50.781 08:38:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:50.781 08:38:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.781 08:38:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:50.781 08:38:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.781 08:38:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.781 ************************************ 00:04:50.781 START TEST env_dpdk_post_init 00:04:50.781 ************************************ 00:04:50.781 08:38:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.781 EAL: Detected CPU lcores: 10 00:04:50.781 EAL: Detected NUMA nodes: 1 00:04:50.781 EAL: Detected shared linkage of DPDK 00:04:50.781 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.781 EAL: Selected IOVA mode 'PA' 00:04:51.041 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:51.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:51.041 Starting DPDK initialization... 00:04:51.041 Starting SPDK post initialization... 00:04:51.041 SPDK NVMe probe 00:04:51.041 Attaching to 0000:00:10.0 00:04:51.041 Attaching to 0000:00:11.0 00:04:51.041 Attached to 0000:00:10.0 00:04:51.041 Attached to 0000:00:11.0 00:04:51.041 Cleaning up... 00:04:51.041 00:04:51.041 real 0m0.194s 00:04:51.041 user 0m0.060s 00:04:51.041 sys 0m0.034s 00:04:51.041 08:38:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.041 08:38:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.041 ************************************ 00:04:51.041 END TEST env_dpdk_post_init 00:04:51.041 ************************************ 00:04:51.041 08:38:58 env -- env/env.sh@26 -- # uname 00:04:51.041 08:38:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.041 08:38:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.041 08:38:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.041 08:38:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.041 08:38:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.041 ************************************ 00:04:51.041 START TEST env_mem_callbacks 00:04:51.041 ************************************ 00:04:51.041 08:38:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.041 EAL: Detected CPU lcores: 10 00:04:51.041 EAL: Detected NUMA nodes: 1 00:04:51.041 EAL: Detected shared linkage of DPDK 00:04:51.041 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.041 EAL: Selected IOVA mode 'PA' 00:04:51.300 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.300 00:04:51.300 00:04:51.300 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.300 http://cunit.sourceforge.net/ 00:04:51.300 00:04:51.300 00:04:51.300 Suite: memory 00:04:51.300 Test: test ... 
00:04:51.300 register 0x200000200000 2097152 00:04:51.300 malloc 3145728 00:04:51.300 register 0x200000400000 4194304 00:04:51.300 buf 0x200000500000 len 3145728 PASSED 00:04:51.300 malloc 64 00:04:51.300 buf 0x2000004fff40 len 64 PASSED 00:04:51.300 malloc 4194304 00:04:51.300 register 0x200000800000 6291456 00:04:51.300 buf 0x200000a00000 len 4194304 PASSED 00:04:51.300 free 0x200000500000 3145728 00:04:51.300 free 0x2000004fff40 64 00:04:51.300 unregister 0x200000400000 4194304 PASSED 00:04:51.300 free 0x200000a00000 4194304 00:04:51.300 unregister 0x200000800000 6291456 PASSED 00:04:51.300 malloc 8388608 00:04:51.300 register 0x200000400000 10485760 00:04:51.300 buf 0x200000600000 len 8388608 PASSED 00:04:51.300 free 0x200000600000 8388608 00:04:51.300 unregister 0x200000400000 10485760 PASSED 00:04:51.300 passed 00:04:51.300 00:04:51.300 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.300 suites 1 1 n/a 0 0 00:04:51.300 tests 1 1 1 0 0 00:04:51.300 asserts 15 15 15 0 n/a 00:04:51.300 00:04:51.300 Elapsed time = 0.009 seconds 00:04:51.300 00:04:51.300 real 0m0.142s 00:04:51.300 user 0m0.019s 00:04:51.300 sys 0m0.022s 00:04:51.300 08:38:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.300 08:38:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.300 ************************************ 00:04:51.301 END TEST env_mem_callbacks 00:04:51.301 ************************************ 00:04:51.301 00:04:51.301 real 0m1.895s 00:04:51.301 user 0m0.917s 00:04:51.301 sys 0m0.624s 00:04:51.301 08:38:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.301 ************************************ 00:04:51.301 END TEST env 00:04:51.301 ************************************ 00:04:51.301 08:38:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.301 08:38:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.301 08:38:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.301 08:38:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.301 08:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:51.301 ************************************ 00:04:51.301 START TEST rpc 00:04:51.301 ************************************ 00:04:51.301 08:38:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.301 * Looking for test storage... 
00:04:51.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.301 08:38:59 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.301 08:38:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.301 08:38:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.560 08:38:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.560 08:38:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.560 08:38:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.560 08:38:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.560 08:38:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.560 08:38:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.560 08:38:59 rpc -- scripts/common.sh@345 -- # : 1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.560 08:38:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.560 08:38:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.560 08:38:59 rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.560 08:38:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.560 08:38:59 rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.560 08:38:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.560 08:38:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.560 08:38:59 rpc -- scripts/common.sh@368 -- # return 0 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.560 --rc genhtml_branch_coverage=1 00:04:51.560 --rc genhtml_function_coverage=1 00:04:51.560 --rc genhtml_legend=1 00:04:51.560 --rc geninfo_all_blocks=1 00:04:51.560 --rc geninfo_unexecuted_blocks=1 00:04:51.560 00:04:51.560 ' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.560 --rc genhtml_branch_coverage=1 00:04:51.560 --rc genhtml_function_coverage=1 00:04:51.560 --rc genhtml_legend=1 00:04:51.560 --rc geninfo_all_blocks=1 00:04:51.560 --rc geninfo_unexecuted_blocks=1 00:04:51.560 00:04:51.560 ' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.560 --rc genhtml_branch_coverage=1 00:04:51.560 --rc genhtml_function_coverage=1 00:04:51.560 --rc 
genhtml_legend=1 00:04:51.560 --rc geninfo_all_blocks=1 00:04:51.560 --rc geninfo_unexecuted_blocks=1 00:04:51.560 00:04:51.560 ' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.560 --rc genhtml_branch_coverage=1 00:04:51.560 --rc genhtml_function_coverage=1 00:04:51.560 --rc genhtml_legend=1 00:04:51.560 --rc geninfo_all_blocks=1 00:04:51.560 --rc geninfo_unexecuted_blocks=1 00:04:51.560 00:04:51.560 ' 00:04:51.560 08:38:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57884 00:04:51.560 08:38:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.560 08:38:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:51.560 08:38:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57884 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 57884 ']' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.560 08:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.560 [2024-12-11 08:38:59.232226] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:04:51.561 [2024-12-11 08:38:59.232331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57884 ] 00:04:51.820 [2024-12-11 08:38:59.381868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.820 [2024-12-11 08:38:59.422327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.820 [2024-12-11 08:38:59.422391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57884' to capture a snapshot of events at runtime. 00:04:51.820 [2024-12-11 08:38:59.422405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.820 [2024-12-11 08:38:59.422416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.820 [2024-12-11 08:38:59.422425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57884 for offline analysis/debug. 
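The app_setup_trace NOTICE lines above already spell out both ways a tracepoint snapshot could be pulled from this target while the 'bdev' group mask is active. A minimal sketch of that follow-up, reusing only the pid and paths printed above (the spdk_trace binary location is assumed from the standard SPDK build layout and does not appear in this log), would be:

    # live snapshot of the enabled tracepoints from the running spdk_tgt (pid 57884),
    # exactly as the NOTICE suggests
    build/bin/spdk_trace -s spdk_tgt -p 57884
    # or keep the shared-memory trace file for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid57884 /tmp/spdk_tgt_trace.pid57884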
00:04:51.820 [2024-12-11 08:38:59.422827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.820 [2024-12-11 08:38:59.470763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.079 08:38:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.079 08:38:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.079 08:38:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.079 08:38:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.079 08:38:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.079 08:38:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.079 08:38:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.079 08:38:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.079 08:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 ************************************ 00:04:52.079 START TEST rpc_integrity 00:04:52.079 ************************************ 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.079 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.079 { 00:04:52.079 "name": "Malloc0", 00:04:52.079 "aliases": [ 00:04:52.079 "ead93f0a-427f-4a67-aaad-ce5f1fbe1ddb" 00:04:52.079 ], 00:04:52.079 "product_name": "Malloc disk", 00:04:52.079 "block_size": 512, 00:04:52.079 "num_blocks": 16384, 00:04:52.079 "uuid": "ead93f0a-427f-4a67-aaad-ce5f1fbe1ddb", 00:04:52.079 "assigned_rate_limits": { 00:04:52.079 "rw_ios_per_sec": 0, 00:04:52.079 "rw_mbytes_per_sec": 0, 00:04:52.079 "r_mbytes_per_sec": 0, 00:04:52.079 "w_mbytes_per_sec": 0 00:04:52.079 }, 00:04:52.079 "claimed": false, 00:04:52.079 "zoned": false, 00:04:52.079 
"supported_io_types": { 00:04:52.079 "read": true, 00:04:52.079 "write": true, 00:04:52.079 "unmap": true, 00:04:52.079 "flush": true, 00:04:52.079 "reset": true, 00:04:52.079 "nvme_admin": false, 00:04:52.079 "nvme_io": false, 00:04:52.079 "nvme_io_md": false, 00:04:52.079 "write_zeroes": true, 00:04:52.079 "zcopy": true, 00:04:52.080 "get_zone_info": false, 00:04:52.080 "zone_management": false, 00:04:52.080 "zone_append": false, 00:04:52.080 "compare": false, 00:04:52.080 "compare_and_write": false, 00:04:52.080 "abort": true, 00:04:52.080 "seek_hole": false, 00:04:52.080 "seek_data": false, 00:04:52.080 "copy": true, 00:04:52.080 "nvme_iov_md": false 00:04:52.080 }, 00:04:52.080 "memory_domains": [ 00:04:52.080 { 00:04:52.080 "dma_device_id": "system", 00:04:52.080 "dma_device_type": 1 00:04:52.080 }, 00:04:52.080 { 00:04:52.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.080 "dma_device_type": 2 00:04:52.080 } 00:04:52.080 ], 00:04:52.080 "driver_specific": {} 00:04:52.080 } 00:04:52.080 ]' 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.080 [2024-12-11 08:38:59.783406] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.080 [2024-12-11 08:38:59.783482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.080 [2024-12-11 08:38:59.783516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x95eb90 00:04:52.080 [2024-12-11 08:38:59.783539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.080 [2024-12-11 08:38:59.784959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.080 [2024-12-11 08:38:59.785006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.080 Passthru0 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.080 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.080 { 00:04:52.080 "name": "Malloc0", 00:04:52.080 "aliases": [ 00:04:52.080 "ead93f0a-427f-4a67-aaad-ce5f1fbe1ddb" 00:04:52.080 ], 00:04:52.080 "product_name": "Malloc disk", 00:04:52.080 "block_size": 512, 00:04:52.080 "num_blocks": 16384, 00:04:52.080 "uuid": "ead93f0a-427f-4a67-aaad-ce5f1fbe1ddb", 00:04:52.080 "assigned_rate_limits": { 00:04:52.080 "rw_ios_per_sec": 0, 00:04:52.080 "rw_mbytes_per_sec": 0, 00:04:52.080 "r_mbytes_per_sec": 0, 00:04:52.080 "w_mbytes_per_sec": 0 00:04:52.080 }, 00:04:52.080 "claimed": true, 00:04:52.080 "claim_type": "exclusive_write", 00:04:52.080 "zoned": false, 00:04:52.080 "supported_io_types": { 00:04:52.080 "read": true, 00:04:52.080 "write": true, 00:04:52.080 "unmap": true, 00:04:52.080 "flush": true, 00:04:52.080 "reset": true, 00:04:52.080 "nvme_admin": false, 
00:04:52.080 "nvme_io": false, 00:04:52.080 "nvme_io_md": false, 00:04:52.080 "write_zeroes": true, 00:04:52.080 "zcopy": true, 00:04:52.080 "get_zone_info": false, 00:04:52.080 "zone_management": false, 00:04:52.080 "zone_append": false, 00:04:52.080 "compare": false, 00:04:52.080 "compare_and_write": false, 00:04:52.080 "abort": true, 00:04:52.080 "seek_hole": false, 00:04:52.080 "seek_data": false, 00:04:52.080 "copy": true, 00:04:52.080 "nvme_iov_md": false 00:04:52.080 }, 00:04:52.080 "memory_domains": [ 00:04:52.080 { 00:04:52.080 "dma_device_id": "system", 00:04:52.080 "dma_device_type": 1 00:04:52.080 }, 00:04:52.080 { 00:04:52.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.080 "dma_device_type": 2 00:04:52.080 } 00:04:52.080 ], 00:04:52.080 "driver_specific": {} 00:04:52.080 }, 00:04:52.080 { 00:04:52.080 "name": "Passthru0", 00:04:52.080 "aliases": [ 00:04:52.080 "dda53a68-2fd3-5536-a38e-682a8720078a" 00:04:52.080 ], 00:04:52.080 "product_name": "passthru", 00:04:52.080 "block_size": 512, 00:04:52.080 "num_blocks": 16384, 00:04:52.080 "uuid": "dda53a68-2fd3-5536-a38e-682a8720078a", 00:04:52.080 "assigned_rate_limits": { 00:04:52.080 "rw_ios_per_sec": 0, 00:04:52.080 "rw_mbytes_per_sec": 0, 00:04:52.080 "r_mbytes_per_sec": 0, 00:04:52.080 "w_mbytes_per_sec": 0 00:04:52.080 }, 00:04:52.080 "claimed": false, 00:04:52.080 "zoned": false, 00:04:52.080 "supported_io_types": { 00:04:52.080 "read": true, 00:04:52.080 "write": true, 00:04:52.080 "unmap": true, 00:04:52.080 "flush": true, 00:04:52.080 "reset": true, 00:04:52.080 "nvme_admin": false, 00:04:52.080 "nvme_io": false, 00:04:52.080 "nvme_io_md": false, 00:04:52.080 "write_zeroes": true, 00:04:52.080 "zcopy": true, 00:04:52.080 "get_zone_info": false, 00:04:52.080 "zone_management": false, 00:04:52.080 "zone_append": false, 00:04:52.080 "compare": false, 00:04:52.080 "compare_and_write": false, 00:04:52.080 "abort": true, 00:04:52.080 "seek_hole": false, 00:04:52.080 "seek_data": false, 00:04:52.080 "copy": true, 00:04:52.080 "nvme_iov_md": false 00:04:52.080 }, 00:04:52.080 "memory_domains": [ 00:04:52.080 { 00:04:52.080 "dma_device_id": "system", 00:04:52.080 "dma_device_type": 1 00:04:52.080 }, 00:04:52.080 { 00:04:52.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.080 "dma_device_type": 2 00:04:52.080 } 00:04:52.080 ], 00:04:52.080 "driver_specific": { 00:04:52.080 "passthru": { 00:04:52.080 "name": "Passthru0", 00:04:52.080 "base_bdev_name": "Malloc0" 00:04:52.080 } 00:04:52.080 } 00:04:52.080 } 00:04:52.080 ]' 00:04:52.080 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.340 08:38:59 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.340 08:38:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.340 00:04:52.340 real 0m0.324s 00:04:52.340 user 0m0.214s 00:04:52.340 sys 0m0.039s 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.340 ************************************ 00:04:52.340 08:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 END TEST rpc_integrity 00:04:52.340 ************************************ 00:04:52.340 08:38:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.340 08:38:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.340 08:38:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.340 08:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 ************************************ 00:04:52.340 START TEST rpc_plugins 00:04:52.340 ************************************ 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.340 { 00:04:52.340 "name": "Malloc1", 00:04:52.340 "aliases": [ 00:04:52.340 "0eeeec1f-bdbb-42ca-ade9-f9be52e8eead" 00:04:52.340 ], 00:04:52.340 "product_name": "Malloc disk", 00:04:52.340 "block_size": 4096, 00:04:52.340 "num_blocks": 256, 00:04:52.340 "uuid": "0eeeec1f-bdbb-42ca-ade9-f9be52e8eead", 00:04:52.340 "assigned_rate_limits": { 00:04:52.340 "rw_ios_per_sec": 0, 00:04:52.340 "rw_mbytes_per_sec": 0, 00:04:52.340 "r_mbytes_per_sec": 0, 00:04:52.340 "w_mbytes_per_sec": 0 00:04:52.340 }, 00:04:52.340 "claimed": false, 00:04:52.340 "zoned": false, 00:04:52.340 "supported_io_types": { 00:04:52.340 "read": true, 00:04:52.340 "write": true, 00:04:52.340 "unmap": true, 00:04:52.340 "flush": true, 00:04:52.340 "reset": true, 00:04:52.340 "nvme_admin": false, 00:04:52.340 "nvme_io": false, 00:04:52.340 "nvme_io_md": false, 00:04:52.340 "write_zeroes": true, 00:04:52.340 "zcopy": true, 00:04:52.340 "get_zone_info": false, 00:04:52.340 "zone_management": false, 00:04:52.340 "zone_append": false, 00:04:52.340 "compare": false, 00:04:52.340 "compare_and_write": false, 00:04:52.340 "abort": true, 00:04:52.340 "seek_hole": false, 00:04:52.340 "seek_data": false, 00:04:52.340 "copy": true, 00:04:52.340 "nvme_iov_md": false 00:04:52.340 }, 00:04:52.340 "memory_domains": [ 00:04:52.340 { 
00:04:52.340 "dma_device_id": "system", 00:04:52.340 "dma_device_type": 1 00:04:52.340 }, 00:04:52.340 { 00:04:52.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.340 "dma_device_type": 2 00:04:52.340 } 00:04:52.340 ], 00:04:52.340 "driver_specific": {} 00:04:52.340 } 00:04:52.340 ]' 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.340 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.340 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.600 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.600 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.600 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.600 08:39:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.600 00:04:52.600 real 0m0.156s 00:04:52.600 user 0m0.107s 00:04:52.600 sys 0m0.012s 00:04:52.600 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.600 08:39:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.600 ************************************ 00:04:52.600 END TEST rpc_plugins 00:04:52.600 ************************************ 00:04:52.600 08:39:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:52.600 08:39:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.600 08:39:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.600 08:39:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.600 ************************************ 00:04:52.600 START TEST rpc_trace_cmd_test 00:04:52.600 ************************************ 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:52.600 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57884", 00:04:52.600 "tpoint_group_mask": "0x8", 00:04:52.600 "iscsi_conn": { 00:04:52.600 "mask": "0x2", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "scsi": { 00:04:52.600 "mask": "0x4", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "bdev": { 00:04:52.600 "mask": "0x8", 00:04:52.600 "tpoint_mask": "0xffffffffffffffff" 00:04:52.600 }, 00:04:52.600 "nvmf_rdma": { 00:04:52.600 "mask": "0x10", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "nvmf_tcp": { 00:04:52.600 "mask": "0x20", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "ftl": { 00:04:52.600 
"mask": "0x40", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "blobfs": { 00:04:52.600 "mask": "0x80", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "dsa": { 00:04:52.600 "mask": "0x200", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "thread": { 00:04:52.600 "mask": "0x400", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "nvme_pcie": { 00:04:52.600 "mask": "0x800", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "iaa": { 00:04:52.600 "mask": "0x1000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "nvme_tcp": { 00:04:52.600 "mask": "0x2000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "bdev_nvme": { 00:04:52.600 "mask": "0x4000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "sock": { 00:04:52.600 "mask": "0x8000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "blob": { 00:04:52.600 "mask": "0x10000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "bdev_raid": { 00:04:52.600 "mask": "0x20000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 }, 00:04:52.600 "scheduler": { 00:04:52.600 "mask": "0x40000", 00:04:52.600 "tpoint_mask": "0x0" 00:04:52.600 } 00:04:52.600 }' 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:52.600 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:52.859 00:04:52.859 real 0m0.280s 00:04:52.859 user 0m0.244s 00:04:52.859 sys 0m0.023s 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.859 08:39:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 ************************************ 00:04:52.859 END TEST rpc_trace_cmd_test 00:04:52.859 ************************************ 00:04:52.859 08:39:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.859 08:39:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.859 08:39:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.859 08:39:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.859 08:39:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.860 08:39:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.860 ************************************ 00:04:52.860 START TEST rpc_daemon_integrity 00:04:52.860 ************************************ 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.860 
08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.860 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.119 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.119 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.119 { 00:04:53.119 "name": "Malloc2", 00:04:53.119 "aliases": [ 00:04:53.119 "71a20bfa-6417-43af-aea6-94b2c71d8de9" 00:04:53.119 ], 00:04:53.119 "product_name": "Malloc disk", 00:04:53.119 "block_size": 512, 00:04:53.119 "num_blocks": 16384, 00:04:53.119 "uuid": "71a20bfa-6417-43af-aea6-94b2c71d8de9", 00:04:53.119 "assigned_rate_limits": { 00:04:53.119 "rw_ios_per_sec": 0, 00:04:53.119 "rw_mbytes_per_sec": 0, 00:04:53.119 "r_mbytes_per_sec": 0, 00:04:53.119 "w_mbytes_per_sec": 0 00:04:53.119 }, 00:04:53.119 "claimed": false, 00:04:53.119 "zoned": false, 00:04:53.119 "supported_io_types": { 00:04:53.119 "read": true, 00:04:53.119 "write": true, 00:04:53.119 "unmap": true, 00:04:53.119 "flush": true, 00:04:53.119 "reset": true, 00:04:53.119 "nvme_admin": false, 00:04:53.119 "nvme_io": false, 00:04:53.119 "nvme_io_md": false, 00:04:53.119 "write_zeroes": true, 00:04:53.119 "zcopy": true, 00:04:53.119 "get_zone_info": false, 00:04:53.119 "zone_management": false, 00:04:53.119 "zone_append": false, 00:04:53.119 "compare": false, 00:04:53.119 "compare_and_write": false, 00:04:53.119 "abort": true, 00:04:53.119 "seek_hole": false, 00:04:53.119 "seek_data": false, 00:04:53.119 "copy": true, 00:04:53.119 "nvme_iov_md": false 00:04:53.119 }, 00:04:53.119 "memory_domains": [ 00:04:53.119 { 00:04:53.119 "dma_device_id": "system", 00:04:53.119 "dma_device_type": 1 00:04:53.119 }, 00:04:53.119 { 00:04:53.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.119 "dma_device_type": 2 00:04:53.120 } 00:04:53.120 ], 00:04:53.120 "driver_specific": {} 00:04:53.120 } 00:04:53.120 ]' 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 [2024-12-11 08:39:00.691770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.120 [2024-12-11 08:39:00.691831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:53.120 [2024-12-11 08:39:00.691848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9c4440 00:04:53.120 [2024-12-11 08:39:00.691858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.120 [2024-12-11 08:39:00.693653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.120 [2024-12-11 08:39:00.693703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.120 Passthru0 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.120 { 00:04:53.120 "name": "Malloc2", 00:04:53.120 "aliases": [ 00:04:53.120 "71a20bfa-6417-43af-aea6-94b2c71d8de9" 00:04:53.120 ], 00:04:53.120 "product_name": "Malloc disk", 00:04:53.120 "block_size": 512, 00:04:53.120 "num_blocks": 16384, 00:04:53.120 "uuid": "71a20bfa-6417-43af-aea6-94b2c71d8de9", 00:04:53.120 "assigned_rate_limits": { 00:04:53.120 "rw_ios_per_sec": 0, 00:04:53.120 "rw_mbytes_per_sec": 0, 00:04:53.120 "r_mbytes_per_sec": 0, 00:04:53.120 "w_mbytes_per_sec": 0 00:04:53.120 }, 00:04:53.120 "claimed": true, 00:04:53.120 "claim_type": "exclusive_write", 00:04:53.120 "zoned": false, 00:04:53.120 "supported_io_types": { 00:04:53.120 "read": true, 00:04:53.120 "write": true, 00:04:53.120 "unmap": true, 00:04:53.120 "flush": true, 00:04:53.120 "reset": true, 00:04:53.120 "nvme_admin": false, 00:04:53.120 "nvme_io": false, 00:04:53.120 "nvme_io_md": false, 00:04:53.120 "write_zeroes": true, 00:04:53.120 "zcopy": true, 00:04:53.120 "get_zone_info": false, 00:04:53.120 "zone_management": false, 00:04:53.120 "zone_append": false, 00:04:53.120 "compare": false, 00:04:53.120 "compare_and_write": false, 00:04:53.120 "abort": true, 00:04:53.120 "seek_hole": false, 00:04:53.120 "seek_data": false, 00:04:53.120 "copy": true, 00:04:53.120 "nvme_iov_md": false 00:04:53.120 }, 00:04:53.120 "memory_domains": [ 00:04:53.120 { 00:04:53.120 "dma_device_id": "system", 00:04:53.120 "dma_device_type": 1 00:04:53.120 }, 00:04:53.120 { 00:04:53.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.120 "dma_device_type": 2 00:04:53.120 } 00:04:53.120 ], 00:04:53.120 "driver_specific": {} 00:04:53.120 }, 00:04:53.120 { 00:04:53.120 "name": "Passthru0", 00:04:53.120 "aliases": [ 00:04:53.120 "8499585c-9d7e-5c57-a7d6-2f5992b0b276" 00:04:53.120 ], 00:04:53.120 "product_name": "passthru", 00:04:53.120 "block_size": 512, 00:04:53.120 "num_blocks": 16384, 00:04:53.120 "uuid": "8499585c-9d7e-5c57-a7d6-2f5992b0b276", 00:04:53.120 "assigned_rate_limits": { 00:04:53.120 "rw_ios_per_sec": 0, 00:04:53.120 "rw_mbytes_per_sec": 0, 00:04:53.120 "r_mbytes_per_sec": 0, 00:04:53.120 "w_mbytes_per_sec": 0 00:04:53.120 }, 00:04:53.120 "claimed": false, 00:04:53.120 "zoned": false, 00:04:53.120 "supported_io_types": { 00:04:53.120 "read": true, 00:04:53.120 "write": true, 00:04:53.120 "unmap": true, 00:04:53.120 "flush": true, 00:04:53.120 "reset": true, 00:04:53.120 "nvme_admin": false, 00:04:53.120 "nvme_io": false, 00:04:53.120 "nvme_io_md": 
false, 00:04:53.120 "write_zeroes": true, 00:04:53.120 "zcopy": true, 00:04:53.120 "get_zone_info": false, 00:04:53.120 "zone_management": false, 00:04:53.120 "zone_append": false, 00:04:53.120 "compare": false, 00:04:53.120 "compare_and_write": false, 00:04:53.120 "abort": true, 00:04:53.120 "seek_hole": false, 00:04:53.120 "seek_data": false, 00:04:53.120 "copy": true, 00:04:53.120 "nvme_iov_md": false 00:04:53.120 }, 00:04:53.120 "memory_domains": [ 00:04:53.120 { 00:04:53.120 "dma_device_id": "system", 00:04:53.120 "dma_device_type": 1 00:04:53.120 }, 00:04:53.120 { 00:04:53.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.120 "dma_device_type": 2 00:04:53.120 } 00:04:53.120 ], 00:04:53.120 "driver_specific": { 00:04:53.120 "passthru": { 00:04:53.120 "name": "Passthru0", 00:04:53.120 "base_bdev_name": "Malloc2" 00:04:53.120 } 00:04:53.120 } 00:04:53.120 } 00:04:53.120 ]' 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.120 00:04:53.120 real 0m0.314s 00:04:53.120 user 0m0.209s 00:04:53.120 sys 0m0.038s 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.120 ************************************ 00:04:53.120 END TEST rpc_daemon_integrity 00:04:53.120 08:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.120 ************************************ 00:04:53.379 08:39:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.379 08:39:00 rpc -- rpc/rpc.sh@84 -- # killprocess 57884 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 57884 ']' 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@958 -- # kill -0 57884 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57884 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.379 
08:39:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.379 killing process with pid 57884 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57884' 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@973 -- # kill 57884 00:04:53.379 08:39:00 rpc -- common/autotest_common.sh@978 -- # wait 57884 00:04:53.379 00:04:53.379 real 0m2.178s 00:04:53.379 user 0m2.934s 00:04:53.379 sys 0m0.546s 00:04:53.379 08:39:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.379 08:39:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 ************************************ 00:04:53.379 END TEST rpc 00:04:53.379 ************************************ 00:04:53.638 08:39:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.638 08:39:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.638 08:39:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.638 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.638 ************************************ 00:04:53.638 START TEST skip_rpc 00:04:53.638 ************************************ 00:04:53.638 08:39:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.638 * Looking for test storage... 00:04:53.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.638 08:39:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.638 08:39:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.638 08:39:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.638 08:39:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.638 08:39:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.639 08:39:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.639 --rc genhtml_branch_coverage=1 00:04:53.639 --rc genhtml_function_coverage=1 00:04:53.639 --rc genhtml_legend=1 00:04:53.639 --rc geninfo_all_blocks=1 00:04:53.639 --rc geninfo_unexecuted_blocks=1 00:04:53.639 00:04:53.639 ' 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.639 --rc genhtml_branch_coverage=1 00:04:53.639 --rc genhtml_function_coverage=1 00:04:53.639 --rc genhtml_legend=1 00:04:53.639 --rc geninfo_all_blocks=1 00:04:53.639 --rc geninfo_unexecuted_blocks=1 00:04:53.639 00:04:53.639 ' 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.639 --rc genhtml_branch_coverage=1 00:04:53.639 --rc genhtml_function_coverage=1 00:04:53.639 --rc genhtml_legend=1 00:04:53.639 --rc geninfo_all_blocks=1 00:04:53.639 --rc geninfo_unexecuted_blocks=1 00:04:53.639 00:04:53.639 ' 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.639 --rc genhtml_branch_coverage=1 00:04:53.639 --rc genhtml_function_coverage=1 00:04:53.639 --rc genhtml_legend=1 00:04:53.639 --rc geninfo_all_blocks=1 00:04:53.639 --rc geninfo_unexecuted_blocks=1 00:04:53.639 00:04:53.639 ' 00:04:53.639 08:39:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.639 08:39:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.639 08:39:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.639 08:39:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.639 ************************************ 00:04:53.639 START TEST skip_rpc 00:04:53.639 ************************************ 00:04:53.639 08:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:53.639 08:39:01 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58077 00:04:53.639 08:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.639 08:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.639 08:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.898 [2024-12-11 08:39:01.469027] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:04:53.898 [2024-12-11 08:39:01.469124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:04:53.898 [2024-12-11 08:39:01.613864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.898 [2024-12-11 08:39:01.645359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.157 [2024-12-11 08:39:01.685408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58077 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58077 ']' 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58077 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58077 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.431 killing process with pid 58077 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58077' 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58077 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58077 00:04:59.431 00:04:59.431 real 0m5.273s 00:04:59.431 user 0m4.995s 00:04:59.431 sys 0m0.196s 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.431 08:39:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.431 ************************************ 00:04:59.431 END TEST skip_rpc 00:04:59.431 ************************************ 00:04:59.431 08:39:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.431 08:39:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.431 08:39:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.431 08:39:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.431 ************************************ 00:04:59.431 START TEST skip_rpc_with_json 00:04:59.431 ************************************ 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58164 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58164 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58164 ']' 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.431 08:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.431 [2024-12-11 08:39:06.793408] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
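For orientation, the launch-and-wait pattern the harness runs at this point (spawn spdk_tgt, wait for its JSON-RPC socket, then drive it over RPC) reduces to the sketch below. waitforlisten is the helper from common/autotest_common.sh referenced in the log; the polling loop here is only an illustrative stand-in for it, not its exact implementation:

    # start the target on core 0 and remember its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    # poll the default JSON-RPC UNIX socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # the test can then drive the target over RPC, e.g.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp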
00:04:59.431 [2024-12-11 08:39:06.793515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58164 ] 00:04:59.431 [2024-12-11 08:39:06.936904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.431 [2024-12-11 08:39:06.966381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.431 [2024-12-11 08:39:07.003368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.000 [2024-12-11 08:39:07.747045] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.000 request: 00:05:00.000 { 00:05:00.000 "trtype": "tcp", 00:05:00.000 "method": "nvmf_get_transports", 00:05:00.000 "req_id": 1 00:05:00.000 } 00:05:00.000 Got JSON-RPC error response 00:05:00.000 response: 00:05:00.000 { 00:05:00.000 "code": -19, 00:05:00.000 "message": "No such device" 00:05:00.000 } 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.000 [2024-12-11 08:39:07.763159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.000 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.260 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.260 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.260 { 00:05:00.260 "subsystems": [ 00:05:00.260 { 00:05:00.260 "subsystem": "fsdev", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "fsdev_set_opts", 00:05:00.260 "params": { 00:05:00.260 "fsdev_io_pool_size": 65535, 00:05:00.260 "fsdev_io_cache_size": 256 00:05:00.260 } 00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "keyring", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "iobuf", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "iobuf_set_options", 00:05:00.260 "params": { 00:05:00.260 "small_pool_count": 8192, 00:05:00.260 "large_pool_count": 1024, 00:05:00.260 "small_bufsize": 8192, 00:05:00.260 "large_bufsize": 135168, 00:05:00.260 "enable_numa": false 00:05:00.260 } 
00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "sock", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "sock_set_default_impl", 00:05:00.260 "params": { 00:05:00.260 "impl_name": "uring" 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "sock_impl_set_options", 00:05:00.260 "params": { 00:05:00.260 "impl_name": "ssl", 00:05:00.260 "recv_buf_size": 4096, 00:05:00.260 "send_buf_size": 4096, 00:05:00.260 "enable_recv_pipe": true, 00:05:00.260 "enable_quickack": false, 00:05:00.260 "enable_placement_id": 0, 00:05:00.260 "enable_zerocopy_send_server": true, 00:05:00.260 "enable_zerocopy_send_client": false, 00:05:00.260 "zerocopy_threshold": 0, 00:05:00.260 "tls_version": 0, 00:05:00.260 "enable_ktls": false 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "sock_impl_set_options", 00:05:00.260 "params": { 00:05:00.260 "impl_name": "posix", 00:05:00.260 "recv_buf_size": 2097152, 00:05:00.260 "send_buf_size": 2097152, 00:05:00.260 "enable_recv_pipe": true, 00:05:00.260 "enable_quickack": false, 00:05:00.260 "enable_placement_id": 0, 00:05:00.260 "enable_zerocopy_send_server": true, 00:05:00.260 "enable_zerocopy_send_client": false, 00:05:00.260 "zerocopy_threshold": 0, 00:05:00.260 "tls_version": 0, 00:05:00.260 "enable_ktls": false 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "sock_impl_set_options", 00:05:00.260 "params": { 00:05:00.260 "impl_name": "uring", 00:05:00.260 "recv_buf_size": 2097152, 00:05:00.260 "send_buf_size": 2097152, 00:05:00.260 "enable_recv_pipe": true, 00:05:00.260 "enable_quickack": false, 00:05:00.260 "enable_placement_id": 0, 00:05:00.260 "enable_zerocopy_send_server": false, 00:05:00.260 "enable_zerocopy_send_client": false, 00:05:00.260 "zerocopy_threshold": 0, 00:05:00.260 "tls_version": 0, 00:05:00.260 "enable_ktls": false 00:05:00.260 } 00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "vmd", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "accel", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "accel_set_options", 00:05:00.260 "params": { 00:05:00.260 "small_cache_size": 128, 00:05:00.260 "large_cache_size": 16, 00:05:00.260 "task_count": 2048, 00:05:00.260 "sequence_count": 2048, 00:05:00.260 "buf_count": 2048 00:05:00.260 } 00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "bdev", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "bdev_set_options", 00:05:00.260 "params": { 00:05:00.260 "bdev_io_pool_size": 65535, 00:05:00.260 "bdev_io_cache_size": 256, 00:05:00.260 "bdev_auto_examine": true, 00:05:00.260 "iobuf_small_cache_size": 128, 00:05:00.260 "iobuf_large_cache_size": 16 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "bdev_raid_set_options", 00:05:00.260 "params": { 00:05:00.260 "process_window_size_kb": 1024, 00:05:00.260 "process_max_bandwidth_mb_sec": 0 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "bdev_iscsi_set_options", 00:05:00.260 "params": { 00:05:00.260 "timeout_sec": 30 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "bdev_nvme_set_options", 00:05:00.260 "params": { 00:05:00.260 "action_on_timeout": "none", 00:05:00.260 "timeout_us": 0, 00:05:00.260 "timeout_admin_us": 0, 00:05:00.260 "keep_alive_timeout_ms": 10000, 00:05:00.260 "arbitration_burst": 0, 00:05:00.260 "low_priority_weight": 0, 00:05:00.260 "medium_priority_weight": 
0, 00:05:00.260 "high_priority_weight": 0, 00:05:00.260 "nvme_adminq_poll_period_us": 10000, 00:05:00.260 "nvme_ioq_poll_period_us": 0, 00:05:00.260 "io_queue_requests": 0, 00:05:00.260 "delay_cmd_submit": true, 00:05:00.260 "transport_retry_count": 4, 00:05:00.260 "bdev_retry_count": 3, 00:05:00.260 "transport_ack_timeout": 0, 00:05:00.260 "ctrlr_loss_timeout_sec": 0, 00:05:00.260 "reconnect_delay_sec": 0, 00:05:00.260 "fast_io_fail_timeout_sec": 0, 00:05:00.260 "disable_auto_failback": false, 00:05:00.260 "generate_uuids": false, 00:05:00.260 "transport_tos": 0, 00:05:00.260 "nvme_error_stat": false, 00:05:00.260 "rdma_srq_size": 0, 00:05:00.260 "io_path_stat": false, 00:05:00.260 "allow_accel_sequence": false, 00:05:00.260 "rdma_max_cq_size": 0, 00:05:00.260 "rdma_cm_event_timeout_ms": 0, 00:05:00.260 "dhchap_digests": [ 00:05:00.260 "sha256", 00:05:00.260 "sha384", 00:05:00.260 "sha512" 00:05:00.260 ], 00:05:00.260 "dhchap_dhgroups": [ 00:05:00.260 "null", 00:05:00.260 "ffdhe2048", 00:05:00.260 "ffdhe3072", 00:05:00.260 "ffdhe4096", 00:05:00.260 "ffdhe6144", 00:05:00.260 "ffdhe8192" 00:05:00.260 ], 00:05:00.260 "rdma_umr_per_io": false 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "bdev_nvme_set_hotplug", 00:05:00.260 "params": { 00:05:00.260 "period_us": 100000, 00:05:00.260 "enable": false 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "bdev_wait_for_examine" 00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "scsi", 00:05:00.260 "config": null 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "scheduler", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "framework_set_scheduler", 00:05:00.260 "params": { 00:05:00.260 "name": "static" 00:05:00.260 } 00:05:00.260 } 00:05:00.260 ] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "vhost_scsi", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "vhost_blk", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "ublk", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "nbd", 00:05:00.260 "config": [] 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "subsystem": "nvmf", 00:05:00.260 "config": [ 00:05:00.260 { 00:05:00.260 "method": "nvmf_set_config", 00:05:00.260 "params": { 00:05:00.260 "discovery_filter": "match_any", 00:05:00.260 "admin_cmd_passthru": { 00:05:00.260 "identify_ctrlr": false 00:05:00.260 }, 00:05:00.260 "dhchap_digests": [ 00:05:00.260 "sha256", 00:05:00.260 "sha384", 00:05:00.260 "sha512" 00:05:00.260 ], 00:05:00.260 "dhchap_dhgroups": [ 00:05:00.260 "null", 00:05:00.260 "ffdhe2048", 00:05:00.260 "ffdhe3072", 00:05:00.260 "ffdhe4096", 00:05:00.260 "ffdhe6144", 00:05:00.260 "ffdhe8192" 00:05:00.260 ] 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "nvmf_set_max_subsystems", 00:05:00.260 "params": { 00:05:00.260 "max_subsystems": 1024 00:05:00.260 } 00:05:00.260 }, 00:05:00.260 { 00:05:00.260 "method": "nvmf_set_crdt", 00:05:00.260 "params": { 00:05:00.260 "crdt1": 0, 00:05:00.260 "crdt2": 0, 00:05:00.261 "crdt3": 0 00:05:00.261 } 00:05:00.261 }, 00:05:00.261 { 00:05:00.261 "method": "nvmf_create_transport", 00:05:00.261 "params": { 00:05:00.261 "trtype": "TCP", 00:05:00.261 "max_queue_depth": 128, 00:05:00.261 "max_io_qpairs_per_ctrlr": 127, 00:05:00.261 "in_capsule_data_size": 4096, 00:05:00.261 "max_io_size": 131072, 00:05:00.261 "io_unit_size": 131072, 00:05:00.261 "max_aq_depth": 128, 00:05:00.261 
"num_shared_buffers": 511, 00:05:00.261 "buf_cache_size": 4294967295, 00:05:00.261 "dif_insert_or_strip": false, 00:05:00.261 "zcopy": false, 00:05:00.261 "c2h_success": true, 00:05:00.261 "sock_priority": 0, 00:05:00.261 "abort_timeout_sec": 1, 00:05:00.261 "ack_timeout": 0, 00:05:00.261 "data_wr_pool_size": 0 00:05:00.261 } 00:05:00.261 } 00:05:00.261 ] 00:05:00.261 }, 00:05:00.261 { 00:05:00.261 "subsystem": "iscsi", 00:05:00.261 "config": [ 00:05:00.261 { 00:05:00.261 "method": "iscsi_set_options", 00:05:00.261 "params": { 00:05:00.261 "node_base": "iqn.2016-06.io.spdk", 00:05:00.261 "max_sessions": 128, 00:05:00.261 "max_connections_per_session": 2, 00:05:00.261 "max_queue_depth": 64, 00:05:00.261 "default_time2wait": 2, 00:05:00.261 "default_time2retain": 20, 00:05:00.261 "first_burst_length": 8192, 00:05:00.261 "immediate_data": true, 00:05:00.261 "allow_duplicated_isid": false, 00:05:00.261 "error_recovery_level": 0, 00:05:00.261 "nop_timeout": 60, 00:05:00.261 "nop_in_interval": 30, 00:05:00.261 "disable_chap": false, 00:05:00.261 "require_chap": false, 00:05:00.261 "mutual_chap": false, 00:05:00.261 "chap_group": 0, 00:05:00.261 "max_large_datain_per_connection": 64, 00:05:00.261 "max_r2t_per_connection": 4, 00:05:00.261 "pdu_pool_size": 36864, 00:05:00.261 "immediate_data_pool_size": 16384, 00:05:00.261 "data_out_pool_size": 2048 00:05:00.261 } 00:05:00.261 } 00:05:00.261 ] 00:05:00.261 } 00:05:00.261 ] 00:05:00.261 } 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58164 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58164 ']' 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58164 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58164 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58164' 00:05:00.261 killing process with pid 58164 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58164 00:05:00.261 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58164 00:05:00.520 08:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58191 00:05:00.520 08:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.520 08:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58191 ']' 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # 
uname 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.792 killing process with pid 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58191' 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58191 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.792 00:05:05.792 real 0m6.746s 00:05:05.792 user 0m6.693s 00:05:05.792 sys 0m0.435s 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.792 ************************************ 00:05:05.792 END TEST skip_rpc_with_json 00:05:05.792 ************************************ 00:05:05.792 08:39:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.792 08:39:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.792 08:39:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.792 08:39:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.792 ************************************ 00:05:05.792 START TEST skip_rpc_with_delay 00:05:05.792 ************************************ 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.792 
08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.792 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.052 [2024-12-11 08:39:13.594186] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.052 00:05:06.052 real 0m0.091s 00:05:06.052 user 0m0.063s 00:05:06.052 sys 0m0.024s 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.052 08:39:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 ************************************ 00:05:06.052 END TEST skip_rpc_with_delay 00:05:06.052 ************************************ 00:05:06.052 08:39:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.052 08:39:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.052 08:39:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.052 08:39:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.052 08:39:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.052 08:39:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 ************************************ 00:05:06.052 START TEST exit_on_failed_rpc_init 00:05:06.052 ************************************ 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58295 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58295 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58295 ']' 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.052 08:39:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 [2024-12-11 08:39:13.728378] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
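The skip_rpc_with_delay failure captured just above is the expected one: spdk_tgt refuses '--wait-for-rpc' when '--no-rpc-server' is also given. A minimal stand-alone sketch of that negative check, with an assumed build path and a timeout guard that are not part of this run:

    # Hypothetical reproduction of the check; ./build/bin/spdk_tgt is an assumed path.
    if timeout 30 ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started despite --wait-for-rpc with no RPC server" >&2
        exit 1
    fi
    # Expected on stderr: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    echo "spdk_tgt rejected the flag combination as expected"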
00:05:06.052 [2024-12-11 08:39:13.728458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58295 ] 00:05:06.311 [2024-12-11 08:39:13.866670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.311 [2024-12-11 08:39:13.896264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.311 [2024-12-11 08:39:13.933330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.311 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.570 [2024-12-11 08:39:14.120822] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:06.570 [2024-12-11 08:39:14.120944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58306 ] 00:05:06.570 [2024-12-11 08:39:14.272773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.570 [2024-12-11 08:39:14.311872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.570 [2024-12-11 08:39:14.311993] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:06.570 [2024-12-11 08:39:14.312017] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.570 [2024-12-11 08:39:14.312032] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.828 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:06.828 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58295 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58295 ']' 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58295 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58295 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.829 killing process with pid 58295 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58295' 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58295 00:05:06.829 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58295 00:05:07.087 00:05:07.087 real 0m0.959s 00:05:07.087 user 0m1.143s 00:05:07.087 sys 0m0.259s 00:05:07.087 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.087 08:39:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.087 ************************************ 00:05:07.087 END TEST exit_on_failed_rpc_init 00:05:07.088 ************************************ 00:05:07.088 08:39:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.088 00:05:07.088 real 0m13.479s 00:05:07.088 user 0m13.074s 00:05:07.088 sys 0m1.124s 00:05:07.088 08:39:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.088 08:39:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.088 ************************************ 00:05:07.088 END TEST skip_rpc 00:05:07.088 ************************************ 00:05:07.088 08:39:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.088 08:39:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.088 08:39:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.088 08:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.088 
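exit_on_failed_rpc_init, which finishes here, relies on the second spdk_tgt instance failing because the first one already owns the default RPC socket. Condensed to its essentials, the scenario looks like this (sketch with an assumed ./build/bin path and a crude sleep standing in for the suite's waitforlisten):

    ./build/bin/spdk_tgt -m 0x1 &           # first target takes /var/tmp/spdk.sock
    first_pid=$!
    sleep 2                                 # stand-in for waitforlisten
    if ./build/bin/spdk_tgt -m 0x2; then    # same default socket -> "... in use. Specify another."
        echo "unexpected: second target initialized its RPC server" >&2
    fi
    kill "$first_pid"
    wait "$first_pid"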
************************************ 00:05:07.088 START TEST rpc_client 00:05:07.088 ************************************ 00:05:07.088 08:39:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.088 * Looking for test storage... 00:05:07.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:07.088 08:39:14 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.088 08:39:14 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.088 08:39:14 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.346 08:39:14 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.346 08:39:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:07.346 08:39:14 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.347 --rc genhtml_branch_coverage=1 00:05:07.347 --rc genhtml_function_coverage=1 00:05:07.347 --rc genhtml_legend=1 00:05:07.347 --rc geninfo_all_blocks=1 00:05:07.347 --rc geninfo_unexecuted_blocks=1 00:05:07.347 00:05:07.347 ' 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.347 --rc genhtml_branch_coverage=1 00:05:07.347 --rc genhtml_function_coverage=1 00:05:07.347 --rc genhtml_legend=1 00:05:07.347 --rc geninfo_all_blocks=1 00:05:07.347 --rc geninfo_unexecuted_blocks=1 00:05:07.347 00:05:07.347 ' 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.347 --rc genhtml_branch_coverage=1 00:05:07.347 --rc genhtml_function_coverage=1 00:05:07.347 --rc genhtml_legend=1 00:05:07.347 --rc geninfo_all_blocks=1 00:05:07.347 --rc geninfo_unexecuted_blocks=1 00:05:07.347 00:05:07.347 ' 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.347 --rc genhtml_branch_coverage=1 00:05:07.347 --rc genhtml_function_coverage=1 00:05:07.347 --rc genhtml_legend=1 00:05:07.347 --rc geninfo_all_blocks=1 00:05:07.347 --rc geninfo_unexecuted_blocks=1 00:05:07.347 00:05:07.347 ' 00:05:07.347 08:39:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:07.347 OK 00:05:07.347 08:39:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.347 00:05:07.347 real 0m0.199s 00:05:07.347 user 0m0.115s 00:05:07.347 sys 0m0.096s 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.347 ************************************ 00:05:07.347 END TEST rpc_client 00:05:07.347 ************************************ 00:05:07.347 08:39:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.347 08:39:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.347 08:39:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.347 08:39:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.347 08:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.347 ************************************ 00:05:07.347 START TEST json_config 00:05:07.347 ************************************ 00:05:07.347 08:39:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.347 08:39:15 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.347 08:39:15 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.347 08:39:15 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.347 08:39:15 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.347 08:39:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.347 08:39:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.347 08:39:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.347 08:39:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.347 08:39:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.347 08:39:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.347 08:39:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.347 08:39:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.347 08:39:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.605 08:39:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.605 08:39:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:07.605 08:39:15 json_config -- scripts/common.sh@345 -- # : 1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.605 08:39:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.605 08:39:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@353 -- # local d=1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.605 08:39:15 json_config -- scripts/common.sh@355 -- # echo 1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.605 08:39:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:07.605 08:39:15 json_config -- scripts/common.sh@353 -- # local d=2 00:05:07.605 08:39:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.605 08:39:15 json_config -- scripts/common.sh@355 -- # echo 2 00:05:07.605 08:39:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.605 08:39:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.605 08:39:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.605 08:39:15 json_config -- scripts/common.sh@368 -- # return 0 00:05:07.605 08:39:15 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.605 08:39:15 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.605 --rc genhtml_branch_coverage=1 00:05:07.605 --rc genhtml_function_coverage=1 00:05:07.605 --rc genhtml_legend=1 00:05:07.605 --rc geninfo_all_blocks=1 00:05:07.605 --rc geninfo_unexecuted_blocks=1 00:05:07.605 00:05:07.605 ' 00:05:07.605 08:39:15 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.605 --rc genhtml_branch_coverage=1 00:05:07.605 --rc genhtml_function_coverage=1 00:05:07.605 --rc genhtml_legend=1 00:05:07.605 --rc geninfo_all_blocks=1 00:05:07.605 --rc geninfo_unexecuted_blocks=1 00:05:07.605 00:05:07.605 ' 00:05:07.605 08:39:15 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.605 --rc genhtml_branch_coverage=1 00:05:07.606 --rc genhtml_function_coverage=1 00:05:07.606 --rc genhtml_legend=1 00:05:07.606 --rc geninfo_all_blocks=1 00:05:07.606 --rc geninfo_unexecuted_blocks=1 00:05:07.606 00:05:07.606 ' 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.606 --rc genhtml_branch_coverage=1 00:05:07.606 --rc genhtml_function_coverage=1 00:05:07.606 --rc genhtml_legend=1 00:05:07.606 --rc geninfo_all_blocks=1 00:05:07.606 --rc geninfo_unexecuted_blocks=1 00:05:07.606 00:05:07.606 ' 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.606 08:39:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.606 08:39:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.606 08:39:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.606 08:39:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.606 08:39:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.606 08:39:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.606 08:39:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.606 08:39:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.606 08:39:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.606 08:39:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@51 -- # : 0 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.606 08:39:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.606 08:39:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.606 INFO: JSON configuration test init 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.606 08:39:15 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.606 08:39:15 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.606 08:39:15 json_config -- json_config/common.sh@10 -- # shift 
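json_config/common.sh, traced above, pins the target's RPC socket to /var/tmp/spdk_tgt.sock and its parameters to '-m 0x1 -s 1024', and json_config_test_start_app then launches spdk_tgt with --wait-for-rpc. A simplified sketch of that start-and-wait step (paths shortened from the /home/vagrant/spdk_repo ones in the log; the socket poll is a stand-in for the waitforlisten helper):

    sock=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
    tgt_pid=$!
    until [ -S "$sock" ]; do sleep 0.1; done   # wait for the UNIX-domain RPC socket to appear
    # With --wait-for-rpc the target idles in pre-init until told to continue:
    ./scripts/rpc.py -s "$sock" framework_start_init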
00:05:07.606 08:39:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.606 08:39:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.606 08:39:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.606 08:39:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.606 08:39:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.606 08:39:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58440 00:05:07.606 Waiting for target to run... 00:05:07.606 08:39:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.606 08:39:15 json_config -- json_config/common.sh@25 -- # waitforlisten 58440 /var/tmp/spdk_tgt.sock 00:05:07.606 08:39:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@835 -- # '[' -z 58440 ']' 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.606 08:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.606 [2024-12-11 08:39:15.224784] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:07.606 [2024-12-11 08:39:15.224871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58440 ] 00:05:07.864 [2024-12-11 08:39:15.504718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.864 [2024-12-11 08:39:15.526445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.799 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:08.799 08:39:16 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.799 08:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.799 08:39:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:08.799 08:39:16 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.057 [2024-12-11 08:39:16.575425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.057 08:39:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.057 08:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.057 08:39:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:09.057 08:39:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@54 -- # sort 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:09.315 08:39:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:09.315 08:39:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.315 08:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:09.315 08:39:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.315 08:39:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.315 08:39:17 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:09.315 08:39:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.315 08:39:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.574 MallocForNvmf0 00:05:09.574 08:39:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.574 08:39:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.832 MallocForNvmf1 00:05:09.832 08:39:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.832 08:39:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.090 [2024-12-11 08:39:17.670423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.090 08:39:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.090 08:39:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.348 08:39:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.348 08:39:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.606 08:39:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.606 08:39:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.606 08:39:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.606 08:39:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.865 [2024-12-11 08:39:18.550913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.865 08:39:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:10.865 08:39:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.865 08:39:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.865 08:39:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:10.865 08:39:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.865 08:39:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.124 08:39:18 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:11.124 08:39:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.124 08:39:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.124 MallocBdevForConfigChangeCheck 00:05:11.124 08:39:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:11.124 08:39:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.124 08:39:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.383 08:39:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:11.383 08:39:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.642 INFO: shutting down applications... 00:05:11.642 08:39:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:11.642 08:39:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:11.642 08:39:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:11.642 08:39:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:11.642 08:39:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.901 Calling clear_iscsi_subsystem 00:05:11.901 Calling clear_nvmf_subsystem 00:05:11.901 Calling clear_nbd_subsystem 00:05:11.901 Calling clear_ublk_subsystem 00:05:11.901 Calling clear_vhost_blk_subsystem 00:05:11.901 Calling clear_vhost_scsi_subsystem 00:05:11.901 Calling clear_bdev_subsystem 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.901 08:39:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.468 08:39:20 json_config -- json_config/json_config.sh@352 -- # break 00:05:12.468 08:39:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:12.468 08:39:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:12.468 08:39:20 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.468 08:39:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.468 08:39:20 json_config -- json_config/common.sh@35 -- # [[ -n 58440 ]] 00:05:12.468 08:39:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58440 00:05:12.468 08:39:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.468 08:39:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.468 08:39:20 json_config -- json_config/common.sh@41 -- # kill -0 58440 00:05:12.468 08:39:20 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:13.036 08:39:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.036 08:39:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.036 08:39:20 json_config -- json_config/common.sh@41 -- # kill -0 58440 00:05:13.036 08:39:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.036 08:39:20 json_config -- json_config/common.sh@43 -- # break 00:05:13.036 SPDK target shutdown done 00:05:13.036 08:39:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.036 08:39:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.036 INFO: relaunching applications... 00:05:13.036 08:39:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:13.036 08:39:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.036 08:39:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.036 08:39:20 json_config -- json_config/common.sh@10 -- # shift 00:05:13.036 08:39:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.036 08:39:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.036 08:39:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.036 08:39:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.036 08:39:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.036 08:39:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58635 00:05:13.036 Waiting for target to run... 00:05:13.036 08:39:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.036 08:39:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.036 08:39:20 json_config -- json_config/common.sh@25 -- # waitforlisten 58635 /var/tmp/spdk_tgt.sock 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 58635 ']' 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.036 08:39:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.036 [2024-12-11 08:39:20.605741] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
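Collected from the tgt_rpc calls above, the NVMe-oF target that this json_config run builds and then saves comes down to one short RPC sequence. Sketch only: the rpc= prefix is shortened from the workspace path, while the commands and flags are exactly those traced in the log:

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config > spdk_tgt_config.json   # the file the relaunch above loads with --json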
00:05:13.036 [2024-12-11 08:39:20.606108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58635 ] 00:05:13.296 [2024-12-11 08:39:20.914203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.296 [2024-12-11 08:39:20.935410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.296 [2024-12-11 08:39:21.065215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.555 [2024-12-11 08:39:21.259188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.555 [2024-12-11 08:39:21.291243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.814 00:05:13.814 INFO: Checking if target configuration is the same... 00:05:13.814 08:39:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.814 08:39:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:13.814 08:39:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.814 08:39:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:13.814 08:39:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.814 08:39:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.814 08:39:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:13.814 08:39:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.814 + '[' 2 -ne 2 ']' 00:05:13.814 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:13.814 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:13.814 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:13.814 +++ basename /dev/fd/62 00:05:13.814 ++ mktemp /tmp/62.XXX 00:05:13.814 + tmp_file_1=/tmp/62.swr 00:05:13.814 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.814 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.814 + tmp_file_2=/tmp/spdk_tgt_config.json.skj 00:05:13.814 + ret=0 00:05:13.814 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.382 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.382 + diff -u /tmp/62.swr /tmp/spdk_tgt_config.json.skj 00:05:14.382 INFO: JSON config files are the same 00:05:14.382 + echo 'INFO: JSON config files are the same' 00:05:14.382 + rm /tmp/62.swr /tmp/spdk_tgt_config.json.skj 00:05:14.382 + exit 0 00:05:14.382 INFO: changing configuration and checking if this can be detected... 00:05:14.382 08:39:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:14.382 08:39:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:14.382 08:39:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.382 08:39:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.640 08:39:22 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.640 08:39:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:14.640 08:39:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.640 + '[' 2 -ne 2 ']' 00:05:14.640 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:14.640 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:14.640 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:14.640 +++ basename /dev/fd/62 00:05:14.640 ++ mktemp /tmp/62.XXX 00:05:14.640 + tmp_file_1=/tmp/62.CJB 00:05:14.640 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.640 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.640 + tmp_file_2=/tmp/spdk_tgt_config.json.u65 00:05:14.640 + ret=0 00:05:14.640 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.899 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.158 + diff -u /tmp/62.CJB /tmp/spdk_tgt_config.json.u65 00:05:15.158 + ret=1 00:05:15.158 + echo '=== Start of file: /tmp/62.CJB ===' 00:05:15.158 + cat /tmp/62.CJB 00:05:15.158 + echo '=== End of file: /tmp/62.CJB ===' 00:05:15.158 + echo '' 00:05:15.158 + echo '=== Start of file: /tmp/spdk_tgt_config.json.u65 ===' 00:05:15.158 + cat /tmp/spdk_tgt_config.json.u65 00:05:15.158 + echo '=== End of file: /tmp/spdk_tgt_config.json.u65 ===' 00:05:15.158 + echo '' 00:05:15.158 + rm /tmp/62.CJB /tmp/spdk_tgt_config.json.u65 00:05:15.158 + exit 1 00:05:15.158 INFO: configuration change detected. 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
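The second pass is the negative check: mutate the running target (here by deleting MallocBdevForConfigChangeCheck with bdev_malloc_delete) and repeat the sorted diff, which must now fail (ret=1 above). A hedged, self-contained sketch of that expectation:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Delete a bdev that spdk_tgt_config.json still describes, so the configs can no longer match
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # Re-dump the live config and compare both sides after sorting (stdin use of config_filter.py is assumed)
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
      | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.sorted
  "$SPDK/test/json_config/config_filter.py" -method sort < "$SPDK/spdk_tgt_config.json" > /tmp/file.sorted
  diff -u /tmp/live.sorted /tmp/file.sorted >/dev/null || echo 'INFO: configuration change detected.'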
00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 58635 ]] 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.158 08:39:22 json_config -- json_config/json_config.sh@330 -- # killprocess 58635 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@954 -- # '[' -z 58635 ']' 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@958 -- # kill -0 58635 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@959 -- # uname 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58635 00:05:15.158 killing process with pid 58635 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58635' 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@973 -- # kill 58635 00:05:15.158 08:39:22 json_config -- common/autotest_common.sh@978 -- # wait 58635 00:05:15.419 08:39:22 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:15.419 08:39:22 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:15.419 08:39:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.419 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 INFO: Success 00:05:15.419 08:39:22 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:15.419 08:39:22 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:15.419 00:05:15.419 real 0m8.021s 00:05:15.419 user 0m11.560s 00:05:15.419 sys 0m1.381s 00:05:15.419 
08:39:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.419 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 ************************************ 00:05:15.419 END TEST json_config 00:05:15.419 ************************************ 00:05:15.419 08:39:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:15.419 08:39:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.419 08:39:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.419 08:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:15.419 ************************************ 00:05:15.419 START TEST json_config_extra_key 00:05:15.419 ************************************ 00:05:15.419 08:39:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:15.419 08:39:23 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.419 08:39:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.419 08:39:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.679 --rc genhtml_branch_coverage=1 00:05:15.679 --rc genhtml_function_coverage=1 00:05:15.679 --rc genhtml_legend=1 00:05:15.679 --rc geninfo_all_blocks=1 00:05:15.679 --rc geninfo_unexecuted_blocks=1 00:05:15.679 00:05:15.679 ' 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.679 --rc genhtml_branch_coverage=1 00:05:15.679 --rc genhtml_function_coverage=1 00:05:15.679 --rc genhtml_legend=1 00:05:15.679 --rc geninfo_all_blocks=1 00:05:15.679 --rc geninfo_unexecuted_blocks=1 00:05:15.679 00:05:15.679 ' 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.679 --rc genhtml_branch_coverage=1 00:05:15.679 --rc genhtml_function_coverage=1 00:05:15.679 --rc genhtml_legend=1 00:05:15.679 --rc geninfo_all_blocks=1 00:05:15.679 --rc geninfo_unexecuted_blocks=1 00:05:15.679 00:05:15.679 ' 00:05:15.679 08:39:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.679 --rc genhtml_branch_coverage=1 00:05:15.679 --rc genhtml_function_coverage=1 00:05:15.679 --rc genhtml_legend=1 00:05:15.679 --rc geninfo_all_blocks=1 00:05:15.679 --rc geninfo_unexecuted_blocks=1 00:05:15.679 00:05:15.679 ' 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.679 08:39:23 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.679 08:39:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.679 08:39:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.679 08:39:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.679 08:39:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.679 08:39:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.679 08:39:23 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.679 08:39:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.679 INFO: launching applications... 00:05:15.679 Waiting for target to run... 00:05:15.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
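json_config_extra_key reuses the same start/stop helpers but boots the target from test/json_config/extra_key.json with the parameters recorded here (-m 0x1 -s 1024, RPC socket /var/tmp/spdk_tgt.sock). A simplified stand-in for json_config_test_start_app plus waitforlisten is sketched below; polling rpc_get_methods as the readiness probe is an assumption, not the helper's exact implementation.

  SPDK=/home/vagrant/spdk_repo/spdk
  # Launch the target the way this test does, then wait for its RPC socket to answer (sketch)
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/test/json_config/extra_key.json" &
  app_pid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done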
00:05:15.679 08:39:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.679 08:39:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.680 08:39:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58784 00:05:15.680 08:39:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.680 08:39:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58784 /var/tmp/spdk_tgt.sock 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58784 ']' 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.680 08:39:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.680 08:39:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.680 [2024-12-11 08:39:23.375476] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:15.680 [2024-12-11 08:39:23.375886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ] 00:05:15.939 [2024-12-11 08:39:23.666926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.939 [2024-12-11 08:39:23.690897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.197 [2024-12-11 08:39:23.717883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.764 08:39:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.764 08:39:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:16.764 08:39:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:16.764 00:05:16.764 08:39:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:16.764 INFO: shutting down applications... 
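Teardown goes through json_config_test_shutdown_app: a SIGINT to the stored pid, then up to 30 kill -0 polls half a second apart, as the trace that follows shows for pid 58784. Written out as a stand-alone sketch:

  # Graceful-stop loop mirroring json_config/common.sh; app_pid is the pid captured at launch
  # (58784 in this run)
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$app_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done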
00:05:16.764 08:39:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:16.764 08:39:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:16.764 08:39:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.764 08:39:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58784 ]] 00:05:16.764 08:39:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58784 00:05:16.765 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.765 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.765 08:39:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58784 00:05:16.765 08:39:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58784 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.333 SPDK target shutdown done 00:05:17.333 Success 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.333 08:39:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.333 08:39:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.333 00:05:17.333 real 0m1.817s 00:05:17.333 user 0m1.647s 00:05:17.333 sys 0m0.326s 00:05:17.333 08:39:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.333 08:39:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.333 ************************************ 00:05:17.333 END TEST json_config_extra_key 00:05:17.333 ************************************ 00:05:17.333 08:39:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.333 08:39:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.333 08:39:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.333 08:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.333 ************************************ 00:05:17.333 START TEST alias_rpc 00:05:17.333 ************************************ 00:05:17.333 08:39:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.333 * Looking for test storage... 
00:05:17.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:17.333 08:39:25 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.333 08:39:25 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.333 08:39:25 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.592 08:39:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.592 --rc genhtml_branch_coverage=1 00:05:17.592 --rc genhtml_function_coverage=1 00:05:17.592 --rc genhtml_legend=1 00:05:17.592 --rc geninfo_all_blocks=1 00:05:17.592 --rc geninfo_unexecuted_blocks=1 00:05:17.592 00:05:17.592 ' 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.592 --rc genhtml_branch_coverage=1 00:05:17.592 --rc genhtml_function_coverage=1 00:05:17.592 --rc genhtml_legend=1 00:05:17.592 --rc geninfo_all_blocks=1 00:05:17.592 --rc geninfo_unexecuted_blocks=1 00:05:17.592 00:05:17.592 ' 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.592 --rc genhtml_branch_coverage=1 00:05:17.592 --rc genhtml_function_coverage=1 00:05:17.592 --rc genhtml_legend=1 00:05:17.592 --rc geninfo_all_blocks=1 00:05:17.592 --rc geninfo_unexecuted_blocks=1 00:05:17.592 00:05:17.592 ' 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.592 --rc genhtml_branch_coverage=1 00:05:17.592 --rc genhtml_function_coverage=1 00:05:17.592 --rc genhtml_legend=1 00:05:17.592 --rc geninfo_all_blocks=1 00:05:17.592 --rc geninfo_unexecuted_blocks=1 00:05:17.592 00:05:17.592 ' 00:05:17.592 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.592 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58862 00:05:17.592 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58862 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58862 ']' 00:05:17.592 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.592 08:39:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.593 08:39:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.593 08:39:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.593 08:39:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.593 08:39:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.593 [2024-12-11 08:39:25.193120] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:17.593 [2024-12-11 08:39:25.193461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58862 ] 00:05:17.593 [2024-12-11 08:39:25.334467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.593 [2024-12-11 08:39:25.364760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.851 [2024-12-11 08:39:25.402425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.851 08:39:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.851 08:39:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.851 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:18.110 08:39:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58862 00:05:18.110 08:39:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58862 ']' 00:05:18.110 08:39:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58862 00:05:18.110 08:39:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.110 08:39:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.110 08:39:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58862 00:05:18.369 killing process with pid 58862 00:05:18.369 08:39:25 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.369 08:39:25 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.369 08:39:25 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58862' 00:05:18.369 08:39:25 alias_rpc -- common/autotest_common.sh@973 -- # kill 58862 00:05:18.369 08:39:25 alias_rpc -- common/autotest_common.sh@978 -- # wait 58862 00:05:18.369 ************************************ 00:05:18.369 END TEST alias_rpc 00:05:18.369 ************************************ 00:05:18.369 00:05:18.369 real 0m1.183s 00:05:18.369 user 0m1.394s 00:05:18.369 sys 0m0.311s 00:05:18.369 08:39:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.369 08:39:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.628 08:39:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:18.628 08:39:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:18.628 08:39:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.628 08:39:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.628 08:39:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.628 ************************************ 00:05:18.628 START TEST spdkcli_tcp 00:05:18.628 ************************************ 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:18.628 * Looking for test storage... 
00:05:18.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.628 08:39:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.628 --rc genhtml_branch_coverage=1 00:05:18.628 --rc genhtml_function_coverage=1 00:05:18.628 --rc genhtml_legend=1 00:05:18.628 --rc geninfo_all_blocks=1 00:05:18.628 --rc geninfo_unexecuted_blocks=1 00:05:18.628 00:05:18.628 ' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.628 --rc genhtml_branch_coverage=1 00:05:18.628 --rc genhtml_function_coverage=1 00:05:18.628 --rc genhtml_legend=1 00:05:18.628 --rc geninfo_all_blocks=1 00:05:18.628 --rc geninfo_unexecuted_blocks=1 00:05:18.628 
00:05:18.628 ' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.628 --rc genhtml_branch_coverage=1 00:05:18.628 --rc genhtml_function_coverage=1 00:05:18.628 --rc genhtml_legend=1 00:05:18.628 --rc geninfo_all_blocks=1 00:05:18.628 --rc geninfo_unexecuted_blocks=1 00:05:18.628 00:05:18.628 ' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.628 --rc genhtml_branch_coverage=1 00:05:18.628 --rc genhtml_function_coverage=1 00:05:18.628 --rc genhtml_legend=1 00:05:18.628 --rc geninfo_all_blocks=1 00:05:18.628 --rc geninfo_unexecuted_blocks=1 00:05:18.628 00:05:18.628 ' 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58933 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58933 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58933 ']' 00:05:18.628 08:39:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.628 08:39:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.887 [2024-12-11 08:39:26.422659] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:18.887 [2024-12-11 08:39:26.422776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:18.887 [2024-12-11 08:39:26.568779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.887 [2024-12-11 08:39:26.604332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.887 [2024-12-11 08:39:26.604342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.887 [2024-12-11 08:39:26.645291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.823 08:39:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.823 08:39:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:19.823 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58950 00:05:19.823 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.823 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.823 [ 00:05:19.823 "bdev_malloc_delete", 00:05:19.823 "bdev_malloc_create", 00:05:19.823 "bdev_null_resize", 00:05:19.823 "bdev_null_delete", 00:05:19.823 "bdev_null_create", 00:05:19.823 "bdev_nvme_cuse_unregister", 00:05:19.823 "bdev_nvme_cuse_register", 00:05:19.823 "bdev_opal_new_user", 00:05:19.823 "bdev_opal_set_lock_state", 00:05:19.823 "bdev_opal_delete", 00:05:19.823 "bdev_opal_get_info", 00:05:19.823 "bdev_opal_create", 00:05:19.823 "bdev_nvme_opal_revert", 00:05:19.823 "bdev_nvme_opal_init", 00:05:19.823 "bdev_nvme_send_cmd", 00:05:19.823 "bdev_nvme_set_keys", 00:05:19.823 "bdev_nvme_get_path_iostat", 00:05:19.823 "bdev_nvme_get_mdns_discovery_info", 00:05:19.823 "bdev_nvme_stop_mdns_discovery", 00:05:19.823 "bdev_nvme_start_mdns_discovery", 00:05:19.823 "bdev_nvme_set_multipath_policy", 00:05:19.823 "bdev_nvme_set_preferred_path", 00:05:19.823 "bdev_nvme_get_io_paths", 00:05:19.823 "bdev_nvme_remove_error_injection", 00:05:19.823 "bdev_nvme_add_error_injection", 00:05:19.823 "bdev_nvme_get_discovery_info", 00:05:19.823 "bdev_nvme_stop_discovery", 00:05:19.823 "bdev_nvme_start_discovery", 00:05:19.823 "bdev_nvme_get_controller_health_info", 00:05:19.823 "bdev_nvme_disable_controller", 00:05:19.823 "bdev_nvme_enable_controller", 00:05:19.823 "bdev_nvme_reset_controller", 00:05:19.823 "bdev_nvme_get_transport_statistics", 00:05:19.823 "bdev_nvme_apply_firmware", 00:05:19.823 "bdev_nvme_detach_controller", 00:05:19.823 "bdev_nvme_get_controllers", 00:05:19.823 "bdev_nvme_attach_controller", 00:05:19.823 "bdev_nvme_set_hotplug", 00:05:19.823 "bdev_nvme_set_options", 00:05:19.823 "bdev_passthru_delete", 00:05:19.823 "bdev_passthru_create", 00:05:19.823 "bdev_lvol_set_parent_bdev", 00:05:19.823 "bdev_lvol_set_parent", 00:05:19.823 "bdev_lvol_check_shallow_copy", 00:05:19.823 "bdev_lvol_start_shallow_copy", 00:05:19.823 "bdev_lvol_grow_lvstore", 00:05:19.823 "bdev_lvol_get_lvols", 00:05:19.823 "bdev_lvol_get_lvstores", 00:05:19.823 "bdev_lvol_delete", 00:05:19.823 "bdev_lvol_set_read_only", 00:05:19.823 "bdev_lvol_resize", 00:05:19.823 "bdev_lvol_decouple_parent", 00:05:19.823 "bdev_lvol_inflate", 00:05:19.823 "bdev_lvol_rename", 00:05:19.823 "bdev_lvol_clone_bdev", 00:05:19.823 "bdev_lvol_clone", 00:05:19.823 "bdev_lvol_snapshot", 
00:05:19.823 "bdev_lvol_create", 00:05:19.823 "bdev_lvol_delete_lvstore", 00:05:19.823 "bdev_lvol_rename_lvstore", 00:05:19.823 "bdev_lvol_create_lvstore", 00:05:19.823 "bdev_raid_set_options", 00:05:19.823 "bdev_raid_remove_base_bdev", 00:05:19.823 "bdev_raid_add_base_bdev", 00:05:19.823 "bdev_raid_delete", 00:05:19.823 "bdev_raid_create", 00:05:19.823 "bdev_raid_get_bdevs", 00:05:19.823 "bdev_error_inject_error", 00:05:19.823 "bdev_error_delete", 00:05:19.823 "bdev_error_create", 00:05:19.823 "bdev_split_delete", 00:05:19.823 "bdev_split_create", 00:05:19.823 "bdev_delay_delete", 00:05:19.823 "bdev_delay_create", 00:05:19.823 "bdev_delay_update_latency", 00:05:19.823 "bdev_zone_block_delete", 00:05:19.823 "bdev_zone_block_create", 00:05:19.823 "blobfs_create", 00:05:19.823 "blobfs_detect", 00:05:19.823 "blobfs_set_cache_size", 00:05:19.823 "bdev_aio_delete", 00:05:19.823 "bdev_aio_rescan", 00:05:19.823 "bdev_aio_create", 00:05:19.823 "bdev_ftl_set_property", 00:05:19.823 "bdev_ftl_get_properties", 00:05:19.823 "bdev_ftl_get_stats", 00:05:19.823 "bdev_ftl_unmap", 00:05:19.823 "bdev_ftl_unload", 00:05:19.823 "bdev_ftl_delete", 00:05:19.823 "bdev_ftl_load", 00:05:19.823 "bdev_ftl_create", 00:05:19.823 "bdev_virtio_attach_controller", 00:05:19.823 "bdev_virtio_scsi_get_devices", 00:05:19.823 "bdev_virtio_detach_controller", 00:05:19.823 "bdev_virtio_blk_set_hotplug", 00:05:19.823 "bdev_iscsi_delete", 00:05:19.823 "bdev_iscsi_create", 00:05:19.823 "bdev_iscsi_set_options", 00:05:19.823 "bdev_uring_delete", 00:05:19.823 "bdev_uring_rescan", 00:05:19.823 "bdev_uring_create", 00:05:19.823 "accel_error_inject_error", 00:05:19.823 "ioat_scan_accel_module", 00:05:19.823 "dsa_scan_accel_module", 00:05:19.823 "iaa_scan_accel_module", 00:05:19.823 "keyring_file_remove_key", 00:05:19.823 "keyring_file_add_key", 00:05:19.823 "keyring_linux_set_options", 00:05:19.823 "fsdev_aio_delete", 00:05:19.823 "fsdev_aio_create", 00:05:19.823 "iscsi_get_histogram", 00:05:19.823 "iscsi_enable_histogram", 00:05:19.823 "iscsi_set_options", 00:05:19.823 "iscsi_get_auth_groups", 00:05:19.823 "iscsi_auth_group_remove_secret", 00:05:19.823 "iscsi_auth_group_add_secret", 00:05:19.824 "iscsi_delete_auth_group", 00:05:19.824 "iscsi_create_auth_group", 00:05:19.824 "iscsi_set_discovery_auth", 00:05:19.824 "iscsi_get_options", 00:05:19.824 "iscsi_target_node_request_logout", 00:05:19.824 "iscsi_target_node_set_redirect", 00:05:19.824 "iscsi_target_node_set_auth", 00:05:19.824 "iscsi_target_node_add_lun", 00:05:19.824 "iscsi_get_stats", 00:05:19.824 "iscsi_get_connections", 00:05:19.824 "iscsi_portal_group_set_auth", 00:05:19.824 "iscsi_start_portal_group", 00:05:19.824 "iscsi_delete_portal_group", 00:05:19.824 "iscsi_create_portal_group", 00:05:19.824 "iscsi_get_portal_groups", 00:05:19.824 "iscsi_delete_target_node", 00:05:19.824 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.824 "iscsi_target_node_add_pg_ig_maps", 00:05:19.824 "iscsi_create_target_node", 00:05:19.824 "iscsi_get_target_nodes", 00:05:19.824 "iscsi_delete_initiator_group", 00:05:19.824 "iscsi_initiator_group_remove_initiators", 00:05:19.824 "iscsi_initiator_group_add_initiators", 00:05:19.824 "iscsi_create_initiator_group", 00:05:19.824 "iscsi_get_initiator_groups", 00:05:19.824 "nvmf_set_crdt", 00:05:19.824 "nvmf_set_config", 00:05:19.824 "nvmf_set_max_subsystems", 00:05:19.824 "nvmf_stop_mdns_prr", 00:05:19.824 "nvmf_publish_mdns_prr", 00:05:19.824 "nvmf_subsystem_get_listeners", 00:05:19.824 "nvmf_subsystem_get_qpairs", 00:05:19.824 
"nvmf_subsystem_get_controllers", 00:05:19.824 "nvmf_get_stats", 00:05:19.824 "nvmf_get_transports", 00:05:19.824 "nvmf_create_transport", 00:05:19.824 "nvmf_get_targets", 00:05:19.824 "nvmf_delete_target", 00:05:19.824 "nvmf_create_target", 00:05:19.824 "nvmf_subsystem_allow_any_host", 00:05:19.824 "nvmf_subsystem_set_keys", 00:05:19.824 "nvmf_subsystem_remove_host", 00:05:19.824 "nvmf_subsystem_add_host", 00:05:19.824 "nvmf_ns_remove_host", 00:05:19.824 "nvmf_ns_add_host", 00:05:19.824 "nvmf_subsystem_remove_ns", 00:05:19.824 "nvmf_subsystem_set_ns_ana_group", 00:05:19.824 "nvmf_subsystem_add_ns", 00:05:19.824 "nvmf_subsystem_listener_set_ana_state", 00:05:19.824 "nvmf_discovery_get_referrals", 00:05:19.824 "nvmf_discovery_remove_referral", 00:05:19.824 "nvmf_discovery_add_referral", 00:05:19.824 "nvmf_subsystem_remove_listener", 00:05:19.824 "nvmf_subsystem_add_listener", 00:05:19.824 "nvmf_delete_subsystem", 00:05:19.824 "nvmf_create_subsystem", 00:05:19.824 "nvmf_get_subsystems", 00:05:19.824 "env_dpdk_get_mem_stats", 00:05:19.824 "nbd_get_disks", 00:05:19.824 "nbd_stop_disk", 00:05:19.824 "nbd_start_disk", 00:05:19.824 "ublk_recover_disk", 00:05:19.824 "ublk_get_disks", 00:05:19.824 "ublk_stop_disk", 00:05:19.824 "ublk_start_disk", 00:05:19.824 "ublk_destroy_target", 00:05:19.824 "ublk_create_target", 00:05:19.824 "virtio_blk_create_transport", 00:05:19.824 "virtio_blk_get_transports", 00:05:19.824 "vhost_controller_set_coalescing", 00:05:19.824 "vhost_get_controllers", 00:05:19.824 "vhost_delete_controller", 00:05:19.824 "vhost_create_blk_controller", 00:05:19.824 "vhost_scsi_controller_remove_target", 00:05:19.824 "vhost_scsi_controller_add_target", 00:05:19.824 "vhost_start_scsi_controller", 00:05:19.824 "vhost_create_scsi_controller", 00:05:19.824 "thread_set_cpumask", 00:05:19.824 "scheduler_set_options", 00:05:19.824 "framework_get_governor", 00:05:19.824 "framework_get_scheduler", 00:05:19.824 "framework_set_scheduler", 00:05:19.824 "framework_get_reactors", 00:05:19.824 "thread_get_io_channels", 00:05:19.824 "thread_get_pollers", 00:05:19.824 "thread_get_stats", 00:05:19.824 "framework_monitor_context_switch", 00:05:19.824 "spdk_kill_instance", 00:05:19.824 "log_enable_timestamps", 00:05:19.824 "log_get_flags", 00:05:19.824 "log_clear_flag", 00:05:19.824 "log_set_flag", 00:05:19.824 "log_get_level", 00:05:19.824 "log_set_level", 00:05:19.824 "log_get_print_level", 00:05:19.824 "log_set_print_level", 00:05:19.824 "framework_enable_cpumask_locks", 00:05:19.824 "framework_disable_cpumask_locks", 00:05:19.824 "framework_wait_init", 00:05:19.824 "framework_start_init", 00:05:19.824 "scsi_get_devices", 00:05:19.824 "bdev_get_histogram", 00:05:19.824 "bdev_enable_histogram", 00:05:19.824 "bdev_set_qos_limit", 00:05:19.824 "bdev_set_qd_sampling_period", 00:05:19.824 "bdev_get_bdevs", 00:05:19.824 "bdev_reset_iostat", 00:05:19.824 "bdev_get_iostat", 00:05:19.824 "bdev_examine", 00:05:19.824 "bdev_wait_for_examine", 00:05:19.824 "bdev_set_options", 00:05:19.824 "accel_get_stats", 00:05:19.824 "accel_set_options", 00:05:19.824 "accel_set_driver", 00:05:19.824 "accel_crypto_key_destroy", 00:05:19.824 "accel_crypto_keys_get", 00:05:19.824 "accel_crypto_key_create", 00:05:19.824 "accel_assign_opc", 00:05:19.824 "accel_get_module_info", 00:05:19.824 "accel_get_opc_assignments", 00:05:19.824 "vmd_rescan", 00:05:19.824 "vmd_remove_device", 00:05:19.824 "vmd_enable", 00:05:19.824 "sock_get_default_impl", 00:05:19.824 "sock_set_default_impl", 00:05:19.824 "sock_impl_set_options", 00:05:19.824 
"sock_impl_get_options", 00:05:19.824 "iobuf_get_stats", 00:05:19.824 "iobuf_set_options", 00:05:19.824 "keyring_get_keys", 00:05:19.824 "framework_get_pci_devices", 00:05:19.824 "framework_get_config", 00:05:19.824 "framework_get_subsystems", 00:05:19.824 "fsdev_set_opts", 00:05:19.824 "fsdev_get_opts", 00:05:19.824 "trace_get_info", 00:05:19.824 "trace_get_tpoint_group_mask", 00:05:19.824 "trace_disable_tpoint_group", 00:05:19.824 "trace_enable_tpoint_group", 00:05:19.824 "trace_clear_tpoint_mask", 00:05:19.824 "trace_set_tpoint_mask", 00:05:19.824 "notify_get_notifications", 00:05:19.824 "notify_get_types", 00:05:19.824 "spdk_get_version", 00:05:19.824 "rpc_get_methods" 00:05:19.824 ] 00:05:19.824 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.824 08:39:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.824 08:39:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.084 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:20.084 08:39:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58933 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58933 ']' 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58933 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58933 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.084 killing process with pid 58933 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58933' 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58933 00:05:20.084 08:39:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58933 00:05:20.343 ************************************ 00:05:20.343 END TEST spdkcli_tcp 00:05:20.343 ************************************ 00:05:20.343 00:05:20.343 real 0m1.733s 00:05:20.343 user 0m3.262s 00:05:20.343 sys 0m0.359s 00:05:20.343 08:39:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.343 08:39:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.343 08:39:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.343 08:39:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.343 08:39:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.343 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.343 ************************************ 00:05:20.343 START TEST dpdk_mem_utility 00:05:20.343 ************************************ 00:05:20.343 08:39:27 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.343 * Looking for test storage... 
00:05:20.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:20.343 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.343 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.343 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.343 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.343 08:39:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.603 08:39:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.603 --rc genhtml_branch_coverage=1 00:05:20.603 --rc genhtml_function_coverage=1 00:05:20.603 --rc genhtml_legend=1 00:05:20.603 --rc geninfo_all_blocks=1 00:05:20.603 --rc geninfo_unexecuted_blocks=1 00:05:20.603 00:05:20.603 ' 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.603 --rc 
genhtml_branch_coverage=1 00:05:20.603 --rc genhtml_function_coverage=1 00:05:20.603 --rc genhtml_legend=1 00:05:20.603 --rc geninfo_all_blocks=1 00:05:20.603 --rc geninfo_unexecuted_blocks=1 00:05:20.603 00:05:20.603 ' 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.603 --rc genhtml_branch_coverage=1 00:05:20.603 --rc genhtml_function_coverage=1 00:05:20.603 --rc genhtml_legend=1 00:05:20.603 --rc geninfo_all_blocks=1 00:05:20.603 --rc geninfo_unexecuted_blocks=1 00:05:20.603 00:05:20.603 ' 00:05:20.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.603 --rc genhtml_branch_coverage=1 00:05:20.603 --rc genhtml_function_coverage=1 00:05:20.603 --rc genhtml_legend=1 00:05:20.603 --rc geninfo_all_blocks=1 00:05:20.603 --rc geninfo_unexecuted_blocks=1 00:05:20.603 00:05:20.603 ' 00:05:20.603 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:20.603 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59032 00:05:20.603 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59032 00:05:20.603 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.603 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.603 [2024-12-11 08:39:28.196517] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:20.603 [2024-12-11 08:39:28.197278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:20.603 [2024-12-11 08:39:28.344565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.603 [2024-12-11 08:39:28.373684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.863 [2024-12-11 08:39:28.409876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.863 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.863 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:20.863 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.863 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.863 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.863 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.863 { 00:05:20.863 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.863 } 00:05:20.863 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.863 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:20.863 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:20.863 1 heaps totaling size 818.000000 MiB 00:05:20.863 size: 818.000000 MiB heap id: 0 00:05:20.863 end heaps---------- 00:05:20.863 9 mempools totaling size 603.782043 MiB 00:05:20.863 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.863 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.863 size: 100.555481 MiB name: bdev_io_59032 00:05:20.863 size: 50.003479 MiB name: msgpool_59032 00:05:20.863 size: 36.509338 MiB name: fsdev_io_59032 00:05:20.863 size: 21.763794 MiB name: PDU_Pool 00:05:20.863 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.863 size: 4.133484 MiB name: evtpool_59032 00:05:20.863 size: 0.026123 MiB name: Session_Pool 00:05:20.863 end mempools------- 00:05:20.863 6 memzones totaling size 4.142822 MiB 00:05:20.863 size: 1.000366 MiB name: RG_ring_0_59032 00:05:20.863 size: 1.000366 MiB name: RG_ring_1_59032 00:05:20.863 size: 1.000366 MiB name: RG_ring_4_59032 00:05:20.863 size: 1.000366 MiB name: RG_ring_5_59032 00:05:20.863 size: 0.125366 MiB name: RG_ring_2_59032 00:05:20.863 size: 0.015991 MiB name: RG_ring_3_59032 00:05:20.863 end memzones------- 00:05:20.863 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.124 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:05:21.124 list of free elements. 
size: 10.802490 MiB 00:05:21.124 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:21.124 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:21.124 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:21.124 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:21.124 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:21.124 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:21.124 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:21.124 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:21.124 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:05:21.124 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:21.124 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:21.124 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:21.124 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:21.124 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:21.124 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:21.124 list of standard malloc elements. size: 199.268616 MiB 00:05:21.124 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:21.124 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:21.124 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:21.124 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:21.124 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:21.124 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:21.124 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:21.124 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:21.124 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:21.124 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:21.124 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:21.124 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:21.124 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:21.125 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:05:21.125 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:21.125 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:21.126 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:21.126 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:21.126 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:21.126 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:21.126 list of memzone associated elements. 
size: 607.928894 MiB 00:05:21.126 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:21.126 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.126 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:21.126 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.126 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:21.126 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59032_0 00:05:21.126 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:21.126 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59032_0 00:05:21.126 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:21.126 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59032_0 00:05:21.126 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:21.126 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.126 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:21.126 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.126 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:21.126 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59032_0 00:05:21.126 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:21.126 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59032 00:05:21.126 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:21.126 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59032 00:05:21.126 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:21.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.126 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:21.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.126 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:21.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.126 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:21.126 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.126 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:21.126 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59032 00:05:21.126 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:21.126 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59032 00:05:21.126 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:21.126 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59032 00:05:21.126 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:21.126 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59032 00:05:21.126 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:21.126 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59032 00:05:21.126 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:21.126 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59032 00:05:21.126 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:21.126 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.126 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:21.126 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.126 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:21.126 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.127 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:21.127 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59032 00:05:21.127 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:21.127 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59032 00:05:21.127 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:21.127 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.127 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:21.127 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.127 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:21.127 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59032 00:05:21.127 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:21.127 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.127 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:21.127 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59032 00:05:21.127 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:21.127 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59032 00:05:21.127 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:21.127 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59032 00:05:21.127 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:21.127 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.127 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.127 08:39:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59032 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59032 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59032 00:05:21.127 killing process with pid 59032 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59032' 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59032 00:05:21.127 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59032 00:05:21.386 ************************************ 00:05:21.386 END TEST dpdk_mem_utility 00:05:21.386 ************************************ 00:05:21.386 00:05:21.386 real 0m1.007s 00:05:21.386 user 0m1.117s 00:05:21.386 sys 0m0.283s 00:05:21.386 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.386 08:39:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.386 08:39:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:21.386 08:39:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.386 08:39:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.386 08:39:29 -- common/autotest_common.sh@10 -- # set +x 
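The dpdk_mem_utility run that just finished boils down to a short sequence: start spdk_tgt, ask it over RPC to dump the DPDK allocator state, and post-process the dump with scripts/dpdk_mem_info.py. A minimal manual sketch of that flow (not the test script itself), assuming a local SPDK checkout in $SPDK_DIR, the default /var/tmp/spdk.sock RPC socket, and an until-loop standing in for the framework's waitforlisten helper:

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" &                 # start the target application
spdkpid=$!
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

# Ask the DPDK env layer to dump its allocator state; the reply names the dump
# file (/tmp/spdk_mem_dump.txt in the run above).
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize the dump (heap totals, mempools, memzones), then print the
# per-element breakdown of heap 0 -- the two listings shown above.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0

kill "$spdkpid"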
00:05:21.386 ************************************ 00:05:21.386 START TEST event 00:05:21.386 ************************************ 00:05:21.386 08:39:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:21.386 * Looking for test storage... 00:05:21.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:21.386 08:39:29 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.386 08:39:29 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.386 08:39:29 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.645 08:39:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.645 08:39:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.645 08:39:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.645 08:39:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.645 08:39:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.645 08:39:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.645 08:39:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.645 08:39:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.645 08:39:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.645 08:39:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.645 08:39:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.645 08:39:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.645 08:39:29 event -- scripts/common.sh@344 -- # case "$op" in 00:05:21.645 08:39:29 event -- scripts/common.sh@345 -- # : 1 00:05:21.645 08:39:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.645 08:39:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.645 08:39:29 event -- scripts/common.sh@365 -- # decimal 1 00:05:21.645 08:39:29 event -- scripts/common.sh@353 -- # local d=1 00:05:21.645 08:39:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.645 08:39:29 event -- scripts/common.sh@355 -- # echo 1 00:05:21.645 08:39:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.645 08:39:29 event -- scripts/common.sh@366 -- # decimal 2 00:05:21.645 08:39:29 event -- scripts/common.sh@353 -- # local d=2 00:05:21.645 08:39:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.645 08:39:29 event -- scripts/common.sh@355 -- # echo 2 00:05:21.645 08:39:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.645 08:39:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.645 08:39:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.645 08:39:29 event -- scripts/common.sh@368 -- # return 0 00:05:21.645 08:39:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.645 08:39:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.645 --rc genhtml_branch_coverage=1 00:05:21.645 --rc genhtml_function_coverage=1 00:05:21.645 --rc genhtml_legend=1 00:05:21.645 --rc geninfo_all_blocks=1 00:05:21.645 --rc geninfo_unexecuted_blocks=1 00:05:21.646 00:05:21.646 ' 00:05:21.646 08:39:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.646 --rc genhtml_branch_coverage=1 00:05:21.646 --rc genhtml_function_coverage=1 00:05:21.646 --rc genhtml_legend=1 00:05:21.646 --rc 
geninfo_all_blocks=1 00:05:21.646 --rc geninfo_unexecuted_blocks=1 00:05:21.646 00:05:21.646 ' 00:05:21.646 08:39:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.646 --rc genhtml_branch_coverage=1 00:05:21.646 --rc genhtml_function_coverage=1 00:05:21.646 --rc genhtml_legend=1 00:05:21.646 --rc geninfo_all_blocks=1 00:05:21.646 --rc geninfo_unexecuted_blocks=1 00:05:21.646 00:05:21.646 ' 00:05:21.646 08:39:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.646 --rc genhtml_branch_coverage=1 00:05:21.646 --rc genhtml_function_coverage=1 00:05:21.646 --rc genhtml_legend=1 00:05:21.646 --rc geninfo_all_blocks=1 00:05:21.646 --rc geninfo_unexecuted_blocks=1 00:05:21.646 00:05:21.646 ' 00:05:21.646 08:39:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:21.646 08:39:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.646 08:39:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.646 08:39:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:21.646 08:39:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.646 08:39:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.646 ************************************ 00:05:21.646 START TEST event_perf 00:05:21.646 ************************************ 00:05:21.646 08:39:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.646 Running I/O for 1 seconds...[2024-12-11 08:39:29.232386] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:21.646 [2024-12-11 08:39:29.232607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59104 ] 00:05:21.646 [2024-12-11 08:39:29.376495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.646 [2024-12-11 08:39:29.406183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.646 [2024-12-11 08:39:29.406309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.646 [2024-12-11 08:39:29.406392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.646 [2024-12-11 08:39:29.406395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.021 Running I/O for 1 seconds... 00:05:23.021 lcore 0: 199441 00:05:23.021 lcore 1: 199438 00:05:23.021 lcore 2: 199439 00:05:23.021 lcore 3: 199438 00:05:23.021 done. 
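The per-lcore counters above come from a one-second event_perf run on core mask 0xF, i.e. binary 1111, one reactor on each of cores 0-3 (hence the four "Reactor started on core N" notices). A hedged sketch for reproducing the measurement by hand and totalling the counters; the awk post-processing is illustrative and not part of the test:

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# -m 0xF selects cores 0-3, -t 1 runs the event loop for one second, so the
# summed lcore counts approximate total events per second across all reactors.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 |
    awk '/^lcore/ { total += $NF } END { printf "total events: %d\n", total }'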
00:05:23.021 00:05:23.021 real 0m1.237s 00:05:23.021 ************************************ 00:05:23.021 END TEST event_perf 00:05:23.021 ************************************ 00:05:23.021 user 0m4.077s 00:05:23.021 sys 0m0.039s 00:05:23.021 08:39:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.021 08:39:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.021 08:39:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:23.021 08:39:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:23.021 08:39:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.021 08:39:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.021 ************************************ 00:05:23.021 START TEST event_reactor 00:05:23.021 ************************************ 00:05:23.021 08:39:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:23.021 [2024-12-11 08:39:30.522615] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:23.021 [2024-12-11 08:39:30.522684] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59137 ] 00:05:23.021 [2024-12-11 08:39:30.659035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.021 [2024-12-11 08:39:30.688339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.958 test_start 00:05:23.958 oneshot 00:05:23.958 tick 100 00:05:23.958 tick 100 00:05:23.958 tick 250 00:05:23.958 tick 100 00:05:23.958 tick 100 00:05:23.958 tick 100 00:05:23.958 tick 500 00:05:23.958 tick 250 00:05:23.958 tick 100 00:05:23.958 tick 100 00:05:23.958 tick 250 00:05:23.958 tick 100 00:05:23.958 tick 100 00:05:23.958 test_end 00:05:23.958 00:05:23.958 real 0m1.226s 00:05:23.958 user 0m1.091s 00:05:23.958 sys 0m0.030s 00:05:23.958 ************************************ 00:05:23.958 END TEST event_reactor 00:05:23.958 ************************************ 00:05:23.958 08:39:31 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.958 08:39:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:24.218 08:39:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.218 08:39:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:24.218 08:39:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.218 08:39:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.218 ************************************ 00:05:24.218 START TEST event_reactor_perf 00:05:24.218 ************************************ 00:05:24.218 08:39:31 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.218 [2024-12-11 08:39:31.802793] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:24.218 [2024-12-11 08:39:31.802890] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:05:24.218 [2024-12-11 08:39:31.947219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.218 [2024-12-11 08:39:31.979574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.671 test_start 00:05:25.671 test_end 00:05:25.671 Performance: 420342 events per second 00:05:25.671 00:05:25.671 real 0m1.231s 00:05:25.671 user 0m1.084s 00:05:25.671 sys 0m0.042s 00:05:25.671 08:39:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.671 ************************************ 00:05:25.671 END TEST event_reactor_perf 00:05:25.671 ************************************ 00:05:25.671 08:39:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.671 08:39:33 event -- event/event.sh@49 -- # uname -s 00:05:25.671 08:39:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.671 08:39:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.671 08:39:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.671 08:39:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.671 08:39:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.671 ************************************ 00:05:25.671 START TEST event_scheduler 00:05:25.671 ************************************ 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.671 * Looking for test storage... 
00:05:25.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.671 08:39:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.671 --rc genhtml_branch_coverage=1 00:05:25.671 --rc genhtml_function_coverage=1 00:05:25.671 --rc genhtml_legend=1 00:05:25.671 --rc geninfo_all_blocks=1 00:05:25.671 --rc geninfo_unexecuted_blocks=1 00:05:25.671 00:05:25.671 ' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.671 --rc genhtml_branch_coverage=1 00:05:25.671 --rc genhtml_function_coverage=1 00:05:25.671 --rc genhtml_legend=1 00:05:25.671 --rc geninfo_all_blocks=1 00:05:25.671 --rc geninfo_unexecuted_blocks=1 00:05:25.671 00:05:25.671 ' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.671 --rc genhtml_branch_coverage=1 00:05:25.671 --rc genhtml_function_coverage=1 00:05:25.671 --rc genhtml_legend=1 00:05:25.671 --rc geninfo_all_blocks=1 00:05:25.671 --rc geninfo_unexecuted_blocks=1 00:05:25.671 00:05:25.671 ' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.671 --rc genhtml_branch_coverage=1 00:05:25.671 --rc genhtml_function_coverage=1 00:05:25.671 --rc genhtml_legend=1 00:05:25.671 --rc geninfo_all_blocks=1 00:05:25.671 --rc geninfo_unexecuted_blocks=1 00:05:25.671 00:05:25.671 ' 00:05:25.671 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.671 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59242 00:05:25.671 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.671 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.671 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59242 00:05:25.671 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59242 ']' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.671 08:39:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.671 [2024-12-11 08:39:33.319898] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:25.671 [2024-12-11 08:39:33.319997] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59242 ] 00:05:25.943 [2024-12-11 08:39:33.473592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.943 [2024-12-11 08:39:33.518171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.943 [2024-12-11 08:39:33.518299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.943 [2024-12-11 08:39:33.518415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.943 [2024-12-11 08:39:33.518424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:25.943 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.943 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.943 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.943 POWER: Cannot set governor of lcore 0 to performance 00:05:25.943 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.943 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.943 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.943 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.943 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:25.943 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:25.943 POWER: Unable to set Power Management Environment for lcore 0 00:05:25.943 [2024-12-11 08:39:33.608734] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:25.943 [2024-12-11 08:39:33.608845] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:25.943 [2024-12-11 08:39:33.608896] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.943 [2024-12-11 08:39:33.609017] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.943 [2024-12-11 08:39:33.609130] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.943 [2024-12-11 08:39:33.609306] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.943 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 [2024-12-11 08:39:33.646305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.943 [2024-12-11 08:39:33.664322] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.943 08:39:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 ************************************ 00:05:25.943 START TEST scheduler_create_thread 00:05:25.943 ************************************ 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 2 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 3 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 4 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.943 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 5 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 6 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 7 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 8 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 9 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 10 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.203 08:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.579 08:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.579 08:39:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.580 08:39:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.580 08:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.580 08:39:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.955 ************************************ 00:05:28.955 END TEST scheduler_create_thread 00:05:28.955 ************************************ 00:05:28.955 08:39:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.955 00:05:28.955 real 0m2.614s 00:05:28.955 user 0m0.018s 00:05:28.955 sys 0m0.007s 00:05:28.955 08:39:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.955 08:39:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.955 08:39:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.955 08:39:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59242 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59242 ']' 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59242 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59242 00:05:28.955 killing process with pid 59242 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59242' 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59242 00:05:28.955 08:39:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59242 00:05:29.215 [2024-12-11 08:39:36.771744] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:29.215 ************************************ 00:05:29.215 END TEST event_scheduler 00:05:29.215 ************************************ 00:05:29.215 00:05:29.215 real 0m3.835s 00:05:29.215 user 0m5.790s 00:05:29.215 sys 0m0.307s 00:05:29.215 08:39:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.215 08:39:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.215 08:39:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.215 08:39:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.215 08:39:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.215 08:39:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.215 08:39:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.215 ************************************ 00:05:29.215 START TEST app_repeat 00:05:29.215 ************************************ 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.215 Process app_repeat pid: 59327 00:05:29.215 spdk_app_start Round 0 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59327 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59327' 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.215 08:39:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59327 /var/tmp/spdk-nbd.sock 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59327 ']' 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
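The scheduler run that just finished boils down to a short RPC sequence. A minimal sketch of driving it by hand, assuming the SPDK tree is at $SPDK_DIR, the target listens on the default /var/tmp/spdk.sock, and rpc.py can locate the test's scheduler_plugin (the plugin lookup path is an assumption here):

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Pick the dynamic scheduler, then let framework initialization finish.
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init

  # Create one busy thread pinned to core 0 (mask 0x1, 100% active) and one
  # idle thread on the same core; the RPC prints the new thread id.
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

  # Rebalance a thread to 50% activity, or remove it entirely.
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"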
00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.215 08:39:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.474 [2024-12-11 08:39:36.991892] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:29.474 [2024-12-11 08:39:36.991986] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:05:29.474 [2024-12-11 08:39:37.132704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.474 [2024-12-11 08:39:37.162886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.474 [2024-12-11 08:39:37.162894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.475 [2024-12-11 08:39:37.192810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.733 08:39:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.733 08:39:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.734 08:39:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.734 Malloc0 00:05:29.734 08:39:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.301 Malloc1 00:05:30.301 08:39:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.301 08:39:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.561 /dev/nbd0 00:05:30.561 08:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.561 08:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.561 1+0 records in 00:05:30.561 1+0 records out 00:05:30.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292045 s, 14.0 MB/s 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.561 08:39:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.561 08:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.561 08:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.561 08:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.820 /dev/nbd1 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.820 1+0 records in 00:05:30.820 1+0 records out 00:05:30.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030277 s, 13.5 MB/s 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.820 08:39:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.820 08:39:38 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.820 08:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.079 { 00:05:31.079 "nbd_device": "/dev/nbd0", 00:05:31.079 "bdev_name": "Malloc0" 00:05:31.079 }, 00:05:31.079 { 00:05:31.079 "nbd_device": "/dev/nbd1", 00:05:31.079 "bdev_name": "Malloc1" 00:05:31.079 } 00:05:31.079 ]' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.079 { 00:05:31.079 "nbd_device": "/dev/nbd0", 00:05:31.079 "bdev_name": "Malloc0" 00:05:31.079 }, 00:05:31.079 { 00:05:31.079 "nbd_device": "/dev/nbd1", 00:05:31.079 "bdev_name": "Malloc1" 00:05:31.079 } 00:05:31.079 ]' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.079 /dev/nbd1' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.079 /dev/nbd1' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.079 256+0 records in 00:05:31.079 256+0 records out 00:05:31.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974032 s, 108 MB/s 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.079 256+0 records in 00:05:31.079 256+0 records out 00:05:31.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231386 s, 45.3 MB/s 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.079 256+0 records in 00:05:31.079 
256+0 records out 00:05:31.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257221 s, 40.8 MB/s 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.079 08:39:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.339 08:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.907 08:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.166 08:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.166 08:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.166 08:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.166 08:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.167 08:39:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.167 08:39:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.426 08:39:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.426 [2024-12-11 08:39:40.133937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.426 [2024-12-11 08:39:40.166125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.426 [2024-12-11 08:39:40.166162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.426 [2024-12-11 08:39:40.196257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.426 [2024-12-11 08:39:40.196411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.426 [2024-12-11 08:39:40.196423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.714 08:39:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.714 spdk_app_start Round 1 00:05:35.714 08:39:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.714 08:39:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59327 /var/tmp/spdk-nbd.sock 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59327 ']' 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
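Round 1 below repeats the same bdev and NBD bring-up that Round 0 just tore down. A condensed sketch of that setup, assuming an SPDK app is listening on /var/tmp/spdk-nbd.sock, $SPDK_DIR points at the repo, and the nbd kernel module is available:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  modprobe nbd                       # NBD client support in the kernel
  $rpc bdev_malloc_create 64 4096    # 64 MB malloc bdev, 4 KiB blocks; prints the name (Malloc0)
  $rpc bdev_malloc_create 64 4096    # second bdev (Malloc1)
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  grep -q -w nbd0 /proc/partitions && grep -q -w nbd1 /proc/partitions   # devices visible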
00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.714 08:39:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.714 08:39:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.973 Malloc0 00:05:35.973 08:39:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.233 Malloc1 00:05:36.233 08:39:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.233 08:39:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.492 /dev/nbd0 00:05:36.492 08:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.492 08:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.492 1+0 records in 00:05:36.492 1+0 records out 
00:05:36.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297244 s, 13.8 MB/s 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.492 08:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.492 08:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.492 08:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.492 08:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.751 /dev/nbd1 00:05:36.751 08:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.751 08:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.751 1+0 records in 00:05:36.751 1+0 records out 00:05:36.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215937 s, 19.0 MB/s 00:05:36.751 08:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.010 08:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:37.010 08:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.010 08:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.010 08:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:37.010 08:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.010 08:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.010 08:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.010 08:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.010 08:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.269 { 00:05:37.269 "nbd_device": "/dev/nbd0", 00:05:37.269 "bdev_name": "Malloc0" 00:05:37.269 }, 00:05:37.269 { 00:05:37.269 "nbd_device": "/dev/nbd1", 00:05:37.269 "bdev_name": "Malloc1" 00:05:37.269 } 
00:05:37.269 ]' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.269 { 00:05:37.269 "nbd_device": "/dev/nbd0", 00:05:37.269 "bdev_name": "Malloc0" 00:05:37.269 }, 00:05:37.269 { 00:05:37.269 "nbd_device": "/dev/nbd1", 00:05:37.269 "bdev_name": "Malloc1" 00:05:37.269 } 00:05:37.269 ]' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.269 /dev/nbd1' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.269 /dev/nbd1' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.269 08:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.270 256+0 records in 00:05:37.270 256+0 records out 00:05:37.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918133 s, 114 MB/s 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.270 256+0 records in 00:05:37.270 256+0 records out 00:05:37.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025517 s, 41.1 MB/s 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.270 256+0 records in 00:05:37.270 256+0 records out 00:05:37.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253513 s, 41.4 MB/s 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.270 08:39:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.541 08:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.109 08:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.370 08:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.370 08:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.370 08:39:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.370 08:39:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.370 08:39:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.629 08:39:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.629 [2024-12-11 08:39:46.399331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.888 [2024-12-11 08:39:46.437601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.888 [2024-12-11 08:39:46.437611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.888 [2024-12-11 08:39:46.467116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.888 [2024-12-11 08:39:46.467221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.888 [2024-12-11 08:39:46.467234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.176 spdk_app_start Round 2 00:05:42.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.176 08:39:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.176 08:39:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:42.176 08:39:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59327 /var/tmp/spdk-nbd.sock 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59327 ']' 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
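Before Round 2 repeats it once more, the data path exercised in each round is worth spelling out: write a random pattern through the NBD devices with O_DIRECT and compare it back against the source file. A sketch, reusing the $rpc setup above and a scratch file path that is an assumption here:

  testfile=/tmp/nbdrandtest          # scratch file (assumed path)

  dd if=/dev/urandom of="$testfile" bs=4096 count=256            # 1 MiB of random data
  dd if="$testfile" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through each export
  dd if="$testfile" of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$testfile" /dev/nbd0   # fails loudly on the first differing byte
  cmp -b -n 1M "$testfile" /dev/nbd1
  rm -f "$testfile"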
00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.176 08:39:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.176 08:39:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.176 Malloc0 00:05:42.176 08:39:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.435 Malloc1 00:05:42.435 08:39:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.435 08:39:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.694 /dev/nbd0 00:05:42.953 08:39:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.953 08:39:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.953 1+0 records in 00:05:42.953 1+0 records out 
00:05:42.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026206 s, 15.6 MB/s 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.953 08:39:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.953 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.953 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.953 08:39:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.212 /dev/nbd1 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.212 1+0 records in 00:05:43.212 1+0 records out 00:05:43.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263476 s, 15.5 MB/s 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.212 08:39:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.212 08:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.472 { 00:05:43.472 "nbd_device": "/dev/nbd0", 00:05:43.472 "bdev_name": "Malloc0" 00:05:43.472 }, 00:05:43.472 { 00:05:43.472 "nbd_device": "/dev/nbd1", 00:05:43.472 "bdev_name": "Malloc1" 00:05:43.472 } 
00:05:43.472 ]' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.472 { 00:05:43.472 "nbd_device": "/dev/nbd0", 00:05:43.472 "bdev_name": "Malloc0" 00:05:43.472 }, 00:05:43.472 { 00:05:43.472 "nbd_device": "/dev/nbd1", 00:05:43.472 "bdev_name": "Malloc1" 00:05:43.472 } 00:05:43.472 ]' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.472 /dev/nbd1' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.472 /dev/nbd1' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.472 256+0 records in 00:05:43.472 256+0 records out 00:05:43.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575522 s, 182 MB/s 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.472 256+0 records in 00:05:43.472 256+0 records out 00:05:43.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221917 s, 47.3 MB/s 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.472 256+0 records in 00:05:43.472 256+0 records out 00:05:43.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278823 s, 37.6 MB/s 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.472 08:39:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.472 08:39:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.731 08:39:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.991 08:39:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.250 08:39:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.509 08:39:52 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.509 08:39:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.509 08:39:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.768 08:39:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.768 [2024-12-11 08:39:52.516429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.027 [2024-12-11 08:39:52.552882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.027 [2024-12-11 08:39:52.552894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.027 [2024-12-11 08:39:52.583587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.027 [2024-12-11 08:39:52.583709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.027 [2024-12-11 08:39:52.583723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.315 08:39:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59327 /var/tmp/spdk-nbd.sock 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59327 ']' 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.315 08:39:55 event.app_repeat -- event/event.sh@39 -- # killprocess 59327 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59327 ']' 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59327 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59327 00:05:48.315 killing process with pid 59327 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59327' 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59327 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59327 00:05:48.315 spdk_app_start is called in Round 0. 00:05:48.315 Shutdown signal received, stop current app iteration 00:05:48.315 Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 reinitialization... 00:05:48.315 spdk_app_start is called in Round 1. 00:05:48.315 Shutdown signal received, stop current app iteration 00:05:48.315 Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 reinitialization... 00:05:48.315 spdk_app_start is called in Round 2. 00:05:48.315 Shutdown signal received, stop current app iteration 00:05:48.315 Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 reinitialization... 00:05:48.315 spdk_app_start is called in Round 3. 00:05:48.315 Shutdown signal received, stop current app iteration 00:05:48.315 ************************************ 00:05:48.315 END TEST app_repeat 00:05:48.315 ************************************ 00:05:48.315 08:39:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.315 08:39:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.315 00:05:48.315 real 0m18.927s 00:05:48.315 user 0m43.638s 00:05:48.315 sys 0m2.619s 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.315 08:39:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 08:39:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.315 08:39:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.315 08:39:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.315 08:39:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.315 08:39:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 ************************************ 00:05:48.315 START TEST cpu_locks 00:05:48.315 ************************************ 00:05:48.315 08:39:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.315 * Looking for test storage... 
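The app_repeat trace above exercises nbd_dd_data_verify: a 1 MiB file of random data is written, copied onto every exported /dev/nbdX with O_DIRECT, and then each device is compared back against the source file before the disks are stopped over the RPC socket. Below is a minimal standalone sketch of that write/verify loop; it assumes the nbd devices are already exported and uses a temporary file instead of the repo-internal nbdrandtest path, so it is an illustration of the pattern rather than the harness code itself.

    #!/usr/bin/env bash
    # Sketch of the write/verify pattern from the app_repeat trace (placeholder paths).
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)               # devices assumed to be exported already
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

    # Write phase: 1 MiB of random data, copied to every nbd device with O_DIRECT.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: each device must read back byte-identical to the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
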
00:05:48.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.315 08:39:56 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.315 08:39:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.315 08:39:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.574 08:39:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.574 --rc genhtml_branch_coverage=1 00:05:48.574 --rc genhtml_function_coverage=1 00:05:48.574 --rc genhtml_legend=1 00:05:48.574 --rc geninfo_all_blocks=1 00:05:48.574 --rc geninfo_unexecuted_blocks=1 00:05:48.574 00:05:48.574 ' 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.574 --rc genhtml_branch_coverage=1 00:05:48.574 --rc genhtml_function_coverage=1 
00:05:48.574 --rc genhtml_legend=1 00:05:48.574 --rc geninfo_all_blocks=1 00:05:48.574 --rc geninfo_unexecuted_blocks=1 00:05:48.574 00:05:48.574 ' 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.574 --rc genhtml_branch_coverage=1 00:05:48.574 --rc genhtml_function_coverage=1 00:05:48.574 --rc genhtml_legend=1 00:05:48.574 --rc geninfo_all_blocks=1 00:05:48.574 --rc geninfo_unexecuted_blocks=1 00:05:48.574 00:05:48.574 ' 00:05:48.574 08:39:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.574 --rc genhtml_branch_coverage=1 00:05:48.574 --rc genhtml_function_coverage=1 00:05:48.574 --rc genhtml_legend=1 00:05:48.574 --rc geninfo_all_blocks=1 00:05:48.574 --rc geninfo_unexecuted_blocks=1 00:05:48.575 00:05:48.575 ' 00:05:48.575 08:39:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.575 08:39:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.575 08:39:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.575 08:39:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.575 08:39:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.575 08:39:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.575 08:39:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.575 ************************************ 00:05:48.575 START TEST default_locks 00:05:48.575 ************************************ 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59767 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59767 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59767 ']' 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.575 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.575 [2024-12-11 08:39:56.201682] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
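Before the first lock sub-test, the trace above shows cpu_locks.sh gating the lcov coverage flags on the installed lcov version via scripts/common.sh, which splits dotted versions on ".", "-" and ":" and compares them field by field. A rough standalone sketch of that comparison idea follows; it assumes purely numeric fields and is a simplification of the real cmp_versions helper, not a copy of it.

    # Sketch of the field-by-field "less than" version check seen above.
    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }

    ver_lt 1.15 2 && echo "lcov predates 2.x, use the 1.x option set"
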
00:05:48.575 [2024-12-11 08:39:56.201795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:05:48.834 [2024-12-11 08:39:56.354896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.834 [2024-12-11 08:39:56.393754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.834 [2024-12-11 08:39:56.441508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.834 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.834 08:39:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:48.834 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59767 00:05:48.834 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59767 00:05:48.834 08:39:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59767 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59767 ']' 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59767 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59767 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.403 killing process with pid 59767 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59767' 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59767 00:05:49.403 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59767 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59767 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59767 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59767 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59767 ']' 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.662 
08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 ERROR: process (pid: 59767) is no longer running 00:05:49.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59767) - No such process 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.662 00:05:49.662 real 0m1.211s 00:05:49.662 user 0m1.278s 00:05:49.662 sys 0m0.464s 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.662 08:39:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 ************************************ 00:05:49.662 END TEST default_locks 00:05:49.662 ************************************ 00:05:49.662 08:39:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.662 08:39:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.662 08:39:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.662 08:39:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 ************************************ 00:05:49.662 START TEST default_locks_via_rpc 00:05:49.662 ************************************ 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59806 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59806 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59806 ']' 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:49.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.662 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.922 [2024-12-11 08:39:57.459549] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:49.922 [2024-12-11 08:39:57.459673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:05:49.922 [2024-12-11 08:39:57.603779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.922 [2024-12-11 08:39:57.637121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.922 [2024-12-11 08:39:57.681074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59806 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.181 08:39:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59806 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59806 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59806 ']' 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59806 00:05:50.749 08:39:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59806 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.749 killing process with pid 59806 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59806' 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59806 00:05:50.749 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59806 00:05:51.009 00:05:51.009 real 0m1.136s 00:05:51.009 user 0m1.237s 00:05:51.009 sys 0m0.413s 00:05:51.009 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.009 08:39:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.009 ************************************ 00:05:51.009 END TEST default_locks_via_rpc 00:05:51.009 ************************************ 00:05:51.009 08:39:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.009 08:39:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.009 08:39:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.009 08:39:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.009 ************************************ 00:05:51.009 START TEST non_locking_app_on_locked_coremask 00:05:51.009 ************************************ 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59844 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59844 /var/tmp/spdk.sock 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59844 ']' 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
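The default_locks_via_rpc run that just completed checks the same core-mask locks as default_locks, but toggles them at runtime: rpc.py framework_disable_cpumask_locks must drop the per-core lock (lslocks then finds no spdk_cpu_lock entry for the pid), and framework_enable_cpumask_locks must restore it. A minimal sketch of that toggle is below; the rpc.py and socket paths are the ones this run used, and the function assumes a running spdk_tgt whose pid is passed in.

    # Sketch of the runtime lock toggle verified by default_locks_via_rpc above.
    toggle_core_locks() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from this run

        # Drop the per-core lock files at runtime, then confirm none are held.
        "$rpc" -s "$sock" framework_disable_cpumask_locks
        if lslocks -p "$pid" | grep -q spdk_cpu_lock; then return 1; fi

        # Re-enable locking and confirm the lock on the pid is back.
        "$rpc" -s "$sock" framework_enable_cpumask_locks
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
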
00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.009 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.009 [2024-12-11 08:39:58.630200] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:51.009 [2024-12-11 08:39:58.630299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59844 ] 00:05:51.009 [2024-12-11 08:39:58.767888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.268 [2024-12-11 08:39:58.801829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.268 [2024-12-11 08:39:58.845721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59858 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59858 /var/tmp/spdk2.sock 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59858 ']' 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.268 08:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 [2024-12-11 08:39:59.030569] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:51.268 [2024-12-11 08:39:59.030684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:05:51.528 [2024-12-11 08:39:59.183464] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.528 [2024-12-11 08:39:59.183516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.528 [2024-12-11 08:39:59.242475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.792 [2024-12-11 08:39:59.315726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.792 08:39:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.792 08:39:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.792 08:39:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59844 00:05:51.792 08:39:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59844 00:05:51.792 08:39:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59844 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59844 ']' 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59844 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59844 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.733 killing process with pid 59844 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59844' 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59844 00:05:52.733 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59844 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59858 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59858 ']' 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59858 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59858 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.301 killing process with pid 59858 00:05:53.301 08:40:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59858' 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59858 00:05:53.301 08:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59858 00:05:53.560 00:05:53.560 real 0m2.501s 00:05:53.560 user 0m2.845s 00:05:53.560 sys 0m0.827s 00:05:53.560 08:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.560 08:40:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.560 ************************************ 00:05:53.560 END TEST non_locking_app_on_locked_coremask 00:05:53.560 ************************************ 00:05:53.560 08:40:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.560 08:40:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.560 08:40:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.560 08:40:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.560 ************************************ 00:05:53.560 START TEST locking_app_on_unlocked_coremask 00:05:53.560 ************************************ 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59912 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59912 /var/tmp/spdk.sock 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59912 ']' 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.560 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.560 [2024-12-11 08:40:01.202480] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:05:53.560 [2024-12-11 08:40:01.202595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59912 ] 00:05:53.819 [2024-12-11 08:40:01.349393] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
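The non_locking_app_on_locked_coremask run that just finished (pids 59844 and 59858 above) demonstrates the coexistence case: the first spdk_tgt claims the core-0 lock, and a second one on the same mask still starts because it is launched with --disable-cpumask-locks and its own RPC socket. The sketch below condenses that flow; it assumes an environment where spdk_tgt can start (hugepages set up), and it uses sleeps where the real harness polls the RPC socket with waitforlisten.

    # Sketch of the two-instance coexistence pattern from the trace above.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path from this run

    "$SPDK_BIN" -m 0x1 &                 # first instance claims the core-0 lock
    pid1=$!
    sleep 1                              # harness uses waitforlisten on the RPC socket instead

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                              # same mask, but locking is skipped, so both run
    sleep 1

    lslocks -p "$pid1" | grep -q spdk_cpu_lock   # the lock stays with the first instance

    kill "$pid1" "$pid2"
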
00:05:53.819 [2024-12-11 08:40:01.349443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.819 [2024-12-11 08:40:01.378354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.819 [2024-12-11 08:40:01.413438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59915 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59915 /var/tmp/spdk2.sock 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59915 ']' 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.819 08:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.079 [2024-12-11 08:40:01.597862] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:54.079 [2024-12-11 08:40:01.597986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:05:54.079 [2024-12-11 08:40:01.754285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.079 [2024-12-11 08:40:01.815213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.337 [2024-12-11 08:40:01.884322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.337 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.337 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.337 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59915 00:05:54.337 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59915 00:05:54.337 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.273 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59912 00:05:55.273 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59912 ']' 00:05:55.273 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59912 00:05:55.273 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59912 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.274 killing process with pid 59912 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59912' 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59912 00:05:55.274 08:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59912 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59915 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59915 ']' 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59915 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915' 00:05:55.843 killing process with pid 59915 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59915 00:05:55.843 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59915 00:05:56.102 00:05:56.102 real 0m2.505s 00:05:56.102 user 0m2.827s 00:05:56.102 sys 0m0.843s 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 ************************************ 00:05:56.102 END TEST locking_app_on_unlocked_coremask 00:05:56.102 ************************************ 00:05:56.102 08:40:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:56.102 08:40:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.102 08:40:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.102 08:40:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 ************************************ 00:05:56.102 START TEST locking_app_on_locked_coremask 00:05:56.102 ************************************ 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59969 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59969 /var/tmp/spdk.sock 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59969 ']' 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.102 08:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 [2024-12-11 08:40:03.745477] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:56.103 [2024-12-11 08:40:03.745575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59969 ] 00:05:56.362 [2024-12-11 08:40:03.884298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.362 [2024-12-11 08:40:03.914580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.362 [2024-12-11 08:40:03.952280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59983 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59983 /var/tmp/spdk2.sock 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59983 /var/tmp/spdk2.sock 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59983 /var/tmp/spdk2.sock 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59983 ']' 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.362 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.620 [2024-12-11 08:40:04.147415] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:56.620 [2024-12-11 08:40:04.147521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:05:56.620 [2024-12-11 08:40:04.309546] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59969 has claimed it. 00:05:56.620 [2024-12-11 08:40:04.309643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.188 ERROR: process (pid: 59983) is no longer running 00:05:57.188 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59983) - No such process 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59969 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.188 08:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59969 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59969 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59969 ']' 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59969 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59969 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.755 killing process with pid 59969 00:05:57.755 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59969' 00:05:57.756 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59969 00:05:57.756 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59969 00:05:58.015 00:05:58.015 real 0m1.906s 00:05:58.015 user 0m2.340s 00:05:58.015 sys 0m0.489s 00:05:58.015 08:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.015 08:40:05 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:58.015 ************************************ 00:05:58.015 END TEST locking_app_on_locked_coremask 00:05:58.015 ************************************ 00:05:58.015 08:40:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.015 08:40:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.015 08:40:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.015 08:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.015 ************************************ 00:05:58.015 START TEST locking_overlapped_coremask 00:05:58.015 ************************************ 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60023 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60023 /var/tmp/spdk.sock 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60023 ']' 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.015 08:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.015 [2024-12-11 08:40:05.721038] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
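Every sub-test above tears its target down through the same killprocess helper: verify the pid is still alive, check the process name (reactor_0 for an spdk_tgt), signal it, and wait for it to exit. The sketch below mirrors that sequence as seen in the trace; the sudo-wrapped branch of the real helper is only stubbed out here, since its exact handling is not shown in this log.

    # Sketch of the killprocess() teardown used after each sub-test above.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # error out early if it already exited
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for an spdk_tgt
            if [[ $name == sudo ]]; then
                return 1   # real helper treats sudo-wrapped targets specially; omitted here
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"        # valid because the harness started the target in the same shell
    }
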
00:05:58.015 [2024-12-11 08:40:05.721153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60023 ] 00:05:58.274 [2024-12-11 08:40:05.865827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.274 [2024-12-11 08:40:05.897834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.274 [2024-12-11 08:40:05.897934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.274 [2024-12-11 08:40:05.897940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.274 [2024-12-11 08:40:05.937142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60041 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60041 /var/tmp/spdk2.sock 00:05:59.230 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60041 /var/tmp/spdk2.sock 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60041 /var/tmp/spdk2.sock 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60041 ']' 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.231 08:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.231 [2024-12-11 08:40:06.790489] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:05:59.231 [2024-12-11 08:40:06.790583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60041 ] 00:05:59.231 [2024-12-11 08:40:06.952708] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60023 has claimed it. 00:05:59.231 [2024-12-11 08:40:06.956191] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60041) - No such process 00:05:59.799 ERROR: process (pid: 60041) is no longer running 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60023 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60023 ']' 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60023 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60023 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60023' 00:05:59.799 killing process with pid 60023 00:05:59.799 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60023 00:05:59.799 08:40:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60023 00:06:00.057 00:06:00.057 real 0m2.120s 00:06:00.057 user 0m6.245s 00:06:00.057 sys 0m0.336s 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.058 ************************************ 00:06:00.058 END TEST locking_overlapped_coremask 00:06:00.058 ************************************ 00:06:00.058 08:40:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.058 08:40:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.058 08:40:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.058 08:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.058 ************************************ 00:06:00.058 START TEST locking_overlapped_coremask_via_rpc 00:06:00.058 ************************************ 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60081 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60081 /var/tmp/spdk.sock 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60081 ']' 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.058 08:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.317 [2024-12-11 08:40:07.874457] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:00.317 [2024-12-11 08:40:07.874544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60081 ] 00:06:00.317 [2024-12-11 08:40:08.019067] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.317 [2024-12-11 08:40:08.019117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.317 [2024-12-11 08:40:08.055729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.317 [2024-12-11 08:40:08.055823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.317 [2024-12-11 08:40:08.055831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.576 [2024-12-11 08:40:08.097876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60099 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60099 /var/tmp/spdk2.sock 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60099 ']' 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.145 08:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.404 [2024-12-11 08:40:08.919375] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:01.404 [2024-12-11 08:40:08.919466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:06:01.404 [2024-12-11 08:40:09.080588] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
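Both targets in this test start despite the overlapping masks because --disable-cpumask-locks keeps spdk_tgt from taking the per-core lock files at startup; a sketch of the same setup (command lines as in the log, backgrounding added here):

    # neither process creates /var/tmp/spdk_cpu_lock_* yet, so the overlap on core 2 goes
    # unnoticed until framework_enable_cpumask_locks is issued over RPC further down
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &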
00:06:01.404 [2024-12-11 08:40:09.080790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.404 [2024-12-11 08:40:09.153090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.404 [2024-12-11 08:40:09.156304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.404 [2024-12-11 08:40:09.156321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.663 [2024-12-11 08:40:09.240929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.230 [2024-12-11 08:40:09.943371] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60081 has claimed it. 
00:06:02.230 request: 00:06:02.230 { 00:06:02.230 "method": "framework_enable_cpumask_locks", 00:06:02.230 "req_id": 1 00:06:02.230 } 00:06:02.230 Got JSON-RPC error response 00:06:02.230 response: 00:06:02.230 { 00:06:02.230 "code": -32603, 00:06:02.230 "message": "Failed to claim CPU core: 2" 00:06:02.230 } 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.230 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60081 /var/tmp/spdk.sock 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60081 ']' 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.231 08:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60099 /var/tmp/spdk2.sock 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60099 ']' 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.489 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.490 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
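The exchange above corresponds to enabling the locks over JSON-RPC on each target in turn; roughly, with the sockets used in this test:

    # first target (default socket /var/tmp/spdk.sock): claims cores 0-2 and succeeds
    scripts/rpc.py framework_enable_cpumask_locks
    # second target: core 2 is already locked, so the call returns the -32603
    # "Failed to claim CPU core: 2" error shown in the response above
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks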
00:06:02.490 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.490 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.057 ************************************ 00:06:03.057 END TEST locking_overlapped_coremask_via_rpc 00:06:03.057 ************************************ 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.057 00:06:03.057 real 0m2.715s 00:06:03.057 user 0m1.465s 00:06:03.057 sys 0m0.183s 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.057 08:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.057 08:40:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.057 08:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60081 ]] 00:06:03.057 08:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60081 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60081 ']' 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60081 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60081 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60081' 00:06:03.057 killing process with pid 60081 00:06:03.057 08:40:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60081 00:06:03.058 08:40:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60081 00:06:03.317 08:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60099 ]] 00:06:03.317 08:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60099 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60099 ']' 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60099 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.317 
08:40:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60099 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:03.317 killing process with pid 60099 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60099' 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60099 00:06:03.317 08:40:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60099 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60081 ]] 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60081 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60081 ']' 00:06:03.576 Process with pid 60081 is not found 00:06:03.576 Process with pid 60099 is not found 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60081 00:06:03.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60081) - No such process 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60081 is not found' 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60099 ]] 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60099 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60099 ']' 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60099 00:06:03.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60099) - No such process 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60099 is not found' 00:06:03.576 08:40:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.576 00:06:03.576 real 0m15.207s 00:06:03.576 user 0m30.709s 00:06:03.576 sys 0m4.238s 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.576 08:40:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.576 ************************************ 00:06:03.576 END TEST cpu_locks 00:06:03.576 ************************************ 00:06:03.576 00:06:03.576 real 0m42.168s 00:06:03.576 user 1m26.597s 00:06:03.576 sys 0m7.538s 00:06:03.576 ************************************ 00:06:03.576 END TEST event 00:06:03.576 ************************************ 00:06:03.576 08:40:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.576 08:40:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.576 08:40:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.576 08:40:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.576 08:40:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.576 08:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.576 ************************************ 00:06:03.576 START TEST thread 00:06:03.576 ************************************ 00:06:03.576 08:40:11 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.576 * Looking for test storage... 
00:06:03.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:03.576 08:40:11 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.576 08:40:11 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.576 08:40:11 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.835 08:40:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.835 08:40:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.835 08:40:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.835 08:40:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.835 08:40:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.835 08:40:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.835 08:40:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.835 08:40:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.835 08:40:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.835 08:40:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.835 08:40:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.835 08:40:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:03.835 08:40:11 thread -- scripts/common.sh@345 -- # : 1 00:06:03.835 08:40:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.835 08:40:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.835 08:40:11 thread -- scripts/common.sh@365 -- # decimal 1 00:06:03.835 08:40:11 thread -- scripts/common.sh@353 -- # local d=1 00:06:03.835 08:40:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.835 08:40:11 thread -- scripts/common.sh@355 -- # echo 1 00:06:03.835 08:40:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.835 08:40:11 thread -- scripts/common.sh@366 -- # decimal 2 00:06:03.835 08:40:11 thread -- scripts/common.sh@353 -- # local d=2 00:06:03.835 08:40:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.835 08:40:11 thread -- scripts/common.sh@355 -- # echo 2 00:06:03.835 08:40:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.835 08:40:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.835 08:40:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.835 08:40:11 thread -- scripts/common.sh@368 -- # return 0 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.835 --rc genhtml_branch_coverage=1 00:06:03.835 --rc genhtml_function_coverage=1 00:06:03.835 --rc genhtml_legend=1 00:06:03.835 --rc geninfo_all_blocks=1 00:06:03.835 --rc geninfo_unexecuted_blocks=1 00:06:03.835 00:06:03.835 ' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.835 --rc genhtml_branch_coverage=1 00:06:03.835 --rc genhtml_function_coverage=1 00:06:03.835 --rc genhtml_legend=1 00:06:03.835 --rc geninfo_all_blocks=1 00:06:03.835 --rc geninfo_unexecuted_blocks=1 00:06:03.835 00:06:03.835 ' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:03.835 --rc genhtml_branch_coverage=1 00:06:03.835 --rc genhtml_function_coverage=1 00:06:03.835 --rc genhtml_legend=1 00:06:03.835 --rc geninfo_all_blocks=1 00:06:03.835 --rc geninfo_unexecuted_blocks=1 00:06:03.835 00:06:03.835 ' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.835 --rc genhtml_branch_coverage=1 00:06:03.835 --rc genhtml_function_coverage=1 00:06:03.835 --rc genhtml_legend=1 00:06:03.835 --rc geninfo_all_blocks=1 00:06:03.835 --rc geninfo_unexecuted_blocks=1 00:06:03.835 00:06:03.835 ' 00:06:03.835 08:40:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.835 08:40:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.835 ************************************ 00:06:03.835 START TEST thread_poller_perf 00:06:03.835 ************************************ 00:06:03.835 08:40:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.835 [2024-12-11 08:40:11.474639] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:03.835 [2024-12-11 08:40:11.474855] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60235 ] 00:06:04.094 [2024-12-11 08:40:11.620604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.094 [2024-12-11 08:40:11.648580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.094 Running 1000 pollers for 1 seconds with 1 microseconds period. 
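The banner above spells out how the poller_perf flags map: -b is the number of pollers to register, -l their period in microseconds (0 meaning the poller runs on every reactor iteration rather than on a timer), and -t the run time in seconds. The two invocations this test makes are:

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 1000 pollers with no period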
00:06:05.031 [2024-12-11T08:40:12.805Z] ====================================== 00:06:05.031 [2024-12-11T08:40:12.805Z] busy:2208355306 (cyc) 00:06:05.031 [2024-12-11T08:40:12.805Z] total_run_count: 340000 00:06:05.031 [2024-12-11T08:40:12.805Z] tsc_hz: 2200000000 (cyc) 00:06:05.031 [2024-12-11T08:40:12.805Z] ====================================== 00:06:05.031 [2024-12-11T08:40:12.805Z] poller_cost: 6495 (cyc), 2952 (nsec) 00:06:05.031 00:06:05.031 real 0m1.236s 00:06:05.031 user 0m1.098s 00:06:05.031 sys 0m0.031s 00:06:05.031 08:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.031 ************************************ 00:06:05.031 END TEST thread_poller_perf 00:06:05.031 ************************************ 00:06:05.031 08:40:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.031 08:40:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.031 08:40:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:05.031 08:40:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.031 08:40:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.031 ************************************ 00:06:05.031 START TEST thread_poller_perf 00:06:05.031 ************************************ 00:06:05.031 08:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.031 [2024-12-11 08:40:12.762644] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:05.031 [2024-12-11 08:40:12.762723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:06:05.290 [2024-12-11 08:40:12.892590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.290 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:05.290 [2024-12-11 08:40:12.920205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.228 [2024-12-11T08:40:14.002Z] ====================================== 00:06:06.228 [2024-12-11T08:40:14.002Z] busy:2201846102 (cyc) 00:06:06.228 [2024-12-11T08:40:14.002Z] total_run_count: 4448000 00:06:06.228 [2024-12-11T08:40:14.002Z] tsc_hz: 2200000000 (cyc) 00:06:06.228 [2024-12-11T08:40:14.002Z] ====================================== 00:06:06.228 [2024-12-11T08:40:14.002Z] poller_cost: 495 (cyc), 225 (nsec) 00:06:06.228 00:06:06.228 real 0m1.219s 00:06:06.228 user 0m1.084s 00:06:06.228 sys 0m0.030s 00:06:06.228 08:40:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.228 ************************************ 00:06:06.228 END TEST thread_poller_perf 00:06:06.228 ************************************ 00:06:06.228 08:40:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.487 08:40:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:06.487 00:06:06.487 real 0m2.780s 00:06:06.487 user 0m2.364s 00:06:06.487 sys 0m0.193s 00:06:06.487 ************************************ 00:06:06.487 END TEST thread 00:06:06.487 ************************************ 00:06:06.487 08:40:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.487 08:40:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.487 08:40:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:06.487 08:40:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.487 08:40:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.487 08:40:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.487 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.487 ************************************ 00:06:06.487 START TEST app_cmdline 00:06:06.487 ************************************ 00:06:06.487 08:40:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.487 * Looking for test storage... 
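The poller_cost figures in the two result blocks above are simply busy cycles divided by total_run_count, converted to nanoseconds with the reported 2.2 GHz TSC rate; the arithmetic checks out:

    echo $(( 2208355306 / 340000 ))    # ~6495 cyc per poll, 6495 / 2.2 GHz ~ 2952 nsec (-l 1 run)
    echo $(( 2201846102 / 4448000 ))   # ~495 cyc per poll,  495 / 2.2 GHz ~ 225 nsec  (-l 0 run)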
00:06:06.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.487 08:40:14 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.487 08:40:14 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.487 08:40:14 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.487 08:40:14 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.487 08:40:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.487 08:40:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.487 08:40:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.487 08:40:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.487 08:40:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.488 08:40:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 00:06:06.488 ' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 
00:06:06.488 ' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 00:06:06.488 ' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.488 --rc genhtml_branch_coverage=1 00:06:06.488 --rc genhtml_function_coverage=1 00:06:06.488 --rc genhtml_legend=1 00:06:06.488 --rc geninfo_all_blocks=1 00:06:06.488 --rc geninfo_unexecuted_blocks=1 00:06:06.488 00:06:06.488 ' 00:06:06.488 08:40:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.488 08:40:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60348 00:06:06.488 08:40:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.488 08:40:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60348 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60348 ']' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.488 08:40:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.747 [2024-12-11 08:40:14.313213] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
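This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the entries below exercise exactly that, and the equivalent manual calls would be roughly:

    scripts/rpc.py spdk_get_version         # allowed: returns the version object printed below
    scripts/rpc.py rpc_get_methods          # allowed: lists the permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # not allow-listed: fails with -32601 Method not found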
00:06:06.747 [2024-12-11 08:40:14.313308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60348 ] 00:06:06.747 [2024-12-11 08:40:14.456540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.747 [2024-12-11 08:40:14.485590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.006 [2024-12-11 08:40:14.522124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.007 08:40:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.007 08:40:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:07.007 08:40:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:07.266 { 00:06:07.266 "version": "SPDK v25.01-pre git sha1 97b0ef63e", 00:06:07.266 "fields": { 00:06:07.266 "major": 25, 00:06:07.266 "minor": 1, 00:06:07.266 "patch": 0, 00:06:07.266 "suffix": "-pre", 00:06:07.266 "commit": "97b0ef63e" 00:06:07.266 } 00:06:07.266 } 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.266 08:40:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:07.266 08:40:14 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.530 request: 00:06:07.530 { 00:06:07.530 "method": "env_dpdk_get_mem_stats", 00:06:07.530 "req_id": 1 00:06:07.530 } 00:06:07.530 Got JSON-RPC error response 00:06:07.530 response: 00:06:07.530 { 00:06:07.530 "code": -32601, 00:06:07.530 "message": "Method not found" 00:06:07.530 } 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.530 08:40:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60348 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60348 ']' 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60348 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.530 08:40:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60348 00:06:07.793 killing process with pid 60348 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60348' 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 60348 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 60348 00:06:07.793 00:06:07.793 real 0m1.501s 00:06:07.793 user 0m2.023s 00:06:07.793 sys 0m0.350s 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.793 08:40:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.793 ************************************ 00:06:07.793 END TEST app_cmdline 00:06:07.793 ************************************ 00:06:08.052 08:40:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.052 08:40:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.052 08:40:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.052 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.052 ************************************ 00:06:08.052 START TEST version 00:06:08.052 ************************************ 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.052 * Looking for test storage... 
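For reference, the get_header_version helper that the version test below leans on is a small grep/cut/tr pipeline over include/spdk/version.h; the major/minor extraction, copied from the trace:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 25
    grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 1
    # with patch=0 and suffix=-pre the script reports 25.1, then checks 25.1rc0 against
    # python3 -c 'import spdk; print(spdk.__version__)'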
00:06:08.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.052 08:40:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.052 08:40:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.052 08:40:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.052 08:40:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.052 08:40:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.052 08:40:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.052 08:40:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.052 08:40:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.052 08:40:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.052 08:40:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.052 08:40:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.052 08:40:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:08.052 08:40:15 version -- scripts/common.sh@345 -- # : 1 00:06:08.052 08:40:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.052 08:40:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.052 08:40:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:08.052 08:40:15 version -- scripts/common.sh@353 -- # local d=1 00:06:08.052 08:40:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.052 08:40:15 version -- scripts/common.sh@355 -- # echo 1 00:06:08.052 08:40:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.052 08:40:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:08.052 08:40:15 version -- scripts/common.sh@353 -- # local d=2 00:06:08.052 08:40:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.052 08:40:15 version -- scripts/common.sh@355 -- # echo 2 00:06:08.052 08:40:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.052 08:40:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.052 08:40:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.052 08:40:15 version -- scripts/common.sh@368 -- # return 0 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.052 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 08:40:15 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 08:40:15 version -- app/version.sh@17 -- # get_header_version major 00:06:08.052 08:40:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # cut -f2 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.052 08:40:15 version -- app/version.sh@17 -- # major=25 00:06:08.052 08:40:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.052 08:40:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # cut -f2 00:06:08.052 08:40:15 version -- app/version.sh@18 -- # minor=1 00:06:08.052 08:40:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.052 08:40:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # cut -f2 00:06:08.052 08:40:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.312 08:40:15 version -- app/version.sh@19 -- # patch=0 00:06:08.312 08:40:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.312 08:40:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.312 08:40:15 version -- app/version.sh@14 -- # cut -f2 00:06:08.312 08:40:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.312 08:40:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.312 08:40:15 version -- app/version.sh@22 -- # version=25.1 00:06:08.312 08:40:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.312 08:40:15 version -- app/version.sh@28 -- # version=25.1rc0 00:06:08.312 08:40:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.312 08:40:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.312 08:40:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.312 08:40:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.312 00:06:08.312 real 0m0.255s 00:06:08.312 user 0m0.164s 00:06:08.312 sys 0m0.127s 00:06:08.312 ************************************ 00:06:08.312 END TEST version 00:06:08.312 ************************************ 00:06:08.312 08:40:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.312 08:40:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.312 08:40:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.312 08:40:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.312 08:40:15 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.312 08:40:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.312 08:40:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.312 08:40:15 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:08.312 08:40:15 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:08.312 08:40:15 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.312 08:40:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.312 08:40:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.312 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.312 ************************************ 00:06:08.312 START TEST spdk_dd 00:06:08.312 ************************************ 00:06:08.312 08:40:15 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.312 * Looking for test storage... 00:06:08.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.312 08:40:16 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.312 08:40:16 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.312 08:40:16 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.572 --rc genhtml_branch_coverage=1 00:06:08.572 --rc genhtml_function_coverage=1 00:06:08.572 --rc genhtml_legend=1 00:06:08.572 --rc geninfo_all_blocks=1 00:06:08.572 --rc geninfo_unexecuted_blocks=1 00:06:08.572 00:06:08.572 ' 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.572 --rc genhtml_branch_coverage=1 00:06:08.572 --rc genhtml_function_coverage=1 00:06:08.572 --rc genhtml_legend=1 00:06:08.572 --rc geninfo_all_blocks=1 00:06:08.572 --rc geninfo_unexecuted_blocks=1 00:06:08.572 00:06:08.572 ' 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.572 --rc genhtml_branch_coverage=1 00:06:08.572 --rc genhtml_function_coverage=1 00:06:08.572 --rc genhtml_legend=1 00:06:08.572 --rc geninfo_all_blocks=1 00:06:08.572 --rc geninfo_unexecuted_blocks=1 00:06:08.572 00:06:08.572 ' 00:06:08.572 08:40:16 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.572 --rc genhtml_branch_coverage=1 00:06:08.572 --rc genhtml_function_coverage=1 00:06:08.572 --rc genhtml_legend=1 00:06:08.572 --rc geninfo_all_blocks=1 00:06:08.572 --rc geninfo_unexecuted_blocks=1 00:06:08.572 00:06:08.572 ' 00:06:08.572 08:40:16 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.572 08:40:16 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.572 08:40:16 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.572 08:40:16 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.572 08:40:16 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.572 08:40:16 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:08.572 08:40:16 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.572 08:40:16 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.833 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.833 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.833 08:40:16 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:08.833 08:40:16 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:08.833 08:40:16 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:08.833 08:40:16 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.833 08:40:16 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
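Annotation (not part of the captured log): the trace above is nvme_in_userspace enumerating NVMe controllers by PCI class code 01 (mass storage), subclass 08 (NVM), prog-if 02. A condensed sketch of that pipeline, with the observed output as comments; the authoritative logic lives in scripts/common.sh:

    # class/subclass "0108" + prog-if 02 identifies an NVMe controller
    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'
    # -> 0000:00:10.0
    #    0000:00:11.0
    # each BDF is then passed through pci_can_use (an allow/deny-list check)
    # before being appended to the nvmes array used by dd.sh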
00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:08.833 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
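Annotation (not part of the captured log): the long run of "[[ lib == liburing.so.* ]]" tests around this point is check_liburing in dd/common.sh scanning the dynamic dependencies of the spdk_dd binary. Roughly, as a sketch rather than the verbatim helper:

    liburing_in_use=0
    while read -r _ lib _; do
      [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    # in this run the scan reaches liburing.so.2, so the log prints
    # "* spdk_dd linked to liburing" and exports liburing_in_use=1 below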
00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:08.834 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:08.835 * spdk_dd linked to liburing 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:08.835 08:40:16 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:08.835 08:40:16 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:08.835 08:40:16 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:08.835 08:40:16 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:08.835 08:40:16 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:08.835 08:40:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.835 08:40:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.835 08:40:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:09.095 ************************************ 00:06:09.095 START TEST spdk_dd_basic_rw 00:06:09.095 ************************************ 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:09.095 * Looking for test storage... 00:06:09.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.095 --rc genhtml_branch_coverage=1 00:06:09.095 --rc genhtml_function_coverage=1 00:06:09.095 --rc genhtml_legend=1 00:06:09.095 --rc geninfo_all_blocks=1 00:06:09.095 --rc geninfo_unexecuted_blocks=1 00:06:09.095 00:06:09.095 ' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.095 --rc genhtml_branch_coverage=1 00:06:09.095 --rc genhtml_function_coverage=1 00:06:09.095 --rc genhtml_legend=1 00:06:09.095 --rc geninfo_all_blocks=1 00:06:09.095 --rc geninfo_unexecuted_blocks=1 00:06:09.095 00:06:09.095 ' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.095 --rc genhtml_branch_coverage=1 00:06:09.095 --rc genhtml_function_coverage=1 00:06:09.095 --rc genhtml_legend=1 00:06:09.095 --rc geninfo_all_blocks=1 00:06:09.095 --rc geninfo_unexecuted_blocks=1 00:06:09.095 00:06:09.095 ' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.095 --rc genhtml_branch_coverage=1 00:06:09.095 --rc genhtml_function_coverage=1 00:06:09.095 --rc genhtml_legend=1 00:06:09.095 --rc geninfo_all_blocks=1 00:06:09.095 --rc geninfo_unexecuted_blocks=1 00:06:09.095 00:06:09.095 ' 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.095 08:40:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
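Annotation (not part of the captured log): basic_rw.sh describes the target controller as a bash associative array, and gen_conf later renders such arrays into the JSON that spdk_dd consumes via --json /dev/fd/6x; the same JSON appears verbatim further down in this log. A sketch of the mapping:

    declare -A method_bdev_nvme_attach_controller_0=(
      ['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie'
    )
    # becomes, roughly:
    #   {"subsystems":[{"subsystem":"bdev","config":[
    #     {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
    #      "method":"bdev_nvme_attach_controller"},
    #     {"method":"bdev_wait_for_examine"}]}]}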
00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:09.096 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:09.357 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:09.357 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.358 08:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 ************************************ 00:06:09.358 START TEST dd_bs_lt_native_bs 00:06:09.358 ************************************ 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.358 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.358 { 00:06:09.358 "subsystems": [ 00:06:09.358 { 00:06:09.358 "subsystem": "bdev", 00:06:09.358 "config": [ 00:06:09.359 { 00:06:09.359 "params": { 00:06:09.359 "trtype": "pcie", 00:06:09.359 "traddr": "0000:00:10.0", 00:06:09.359 "name": "Nvme0" 00:06:09.359 }, 00:06:09.359 "method": "bdev_nvme_attach_controller" 00:06:09.359 }, 00:06:09.359 { 00:06:09.359 "method": "bdev_wait_for_examine" 00:06:09.359 } 00:06:09.359 ] 00:06:09.359 } 00:06:09.359 ] 00:06:09.359 } 00:06:09.359 [2024-12-11 08:40:17.069526] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:09.359 [2024-12-11 08:40:17.069627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ] 00:06:09.618 [2024-12-11 08:40:17.222715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.618 [2024-12-11 08:40:17.263421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.618 [2024-12-11 08:40:17.300176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.878 [2024-12-11 08:40:17.397983] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:09.878 [2024-12-11 08:40:17.398055] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.878 [2024-12-11 08:40:17.485150] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.878 00:06:09.878 real 0m0.558s 00:06:09.878 user 0m0.399s 00:06:09.878 sys 0m0.117s 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.878 08:40:17 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:09.878 ************************************ 00:06:09.878 END TEST dd_bs_lt_native_bs 00:06:09.878 ************************************ 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.878 ************************************ 00:06:09.878 START TEST dd_rw 00:06:09.878 ************************************ 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:09.878 08:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.815 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:10.815 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.815 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.815 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.815 [2024-12-11 08:40:18.288288] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:10.815 [2024-12-11 08:40:18.288385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ] 00:06:10.815 { 00:06:10.815 "subsystems": [ 00:06:10.815 { 00:06:10.815 "subsystem": "bdev", 00:06:10.815 "config": [ 00:06:10.815 { 00:06:10.815 "params": { 00:06:10.815 "trtype": "pcie", 00:06:10.815 "traddr": "0000:00:10.0", 00:06:10.815 "name": "Nvme0" 00:06:10.815 }, 00:06:10.815 "method": "bdev_nvme_attach_controller" 00:06:10.815 }, 00:06:10.815 { 00:06:10.815 "method": "bdev_wait_for_examine" 00:06:10.815 } 00:06:10.815 ] 00:06:10.815 } 00:06:10.815 ] 00:06:10.815 } 00:06:10.815 [2024-12-11 08:40:18.438462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.815 [2024-12-11 08:40:18.478639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.815 [2024-12-11 08:40:18.514481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.075  [2024-12-11T08:40:18.849Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:11.075 00:06:11.075 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:11.075 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.075 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.075 08:40:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.075 [2024-12-11 08:40:18.782083] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:11.075 [2024-12-11 08:40:18.782212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:06:11.075 { 00:06:11.075 "subsystems": [ 00:06:11.075 { 00:06:11.075 "subsystem": "bdev", 00:06:11.075 "config": [ 00:06:11.075 { 00:06:11.075 "params": { 00:06:11.075 "trtype": "pcie", 00:06:11.075 "traddr": "0000:00:10.0", 00:06:11.075 "name": "Nvme0" 00:06:11.075 }, 00:06:11.075 "method": "bdev_nvme_attach_controller" 00:06:11.075 }, 00:06:11.075 { 00:06:11.075 "method": "bdev_wait_for_examine" 00:06:11.075 } 00:06:11.075 ] 00:06:11.075 } 00:06:11.075 ] 00:06:11.075 } 00:06:11.334 [2024-12-11 08:40:18.919277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.334 [2024-12-11 08:40:18.947929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.334 [2024-12-11 08:40:18.977479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.334  [2024-12-11T08:40:19.367Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:11.593 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.593 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.593 { 00:06:11.593 "subsystems": [ 00:06:11.593 { 00:06:11.593 "subsystem": "bdev", 00:06:11.593 "config": [ 00:06:11.593 { 00:06:11.593 "params": { 00:06:11.593 "trtype": "pcie", 00:06:11.593 "traddr": "0000:00:10.0", 00:06:11.593 "name": "Nvme0" 00:06:11.593 }, 00:06:11.593 "method": "bdev_nvme_attach_controller" 00:06:11.593 }, 00:06:11.593 { 00:06:11.593 "method": "bdev_wait_for_examine" 00:06:11.593 } 00:06:11.593 ] 00:06:11.593 } 00:06:11.593 ] 00:06:11.593 } 00:06:11.593 [2024-12-11 08:40:19.270715] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:11.593 [2024-12-11 08:40:19.270841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:06:11.853 [2024-12-11 08:40:19.420334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.853 [2024-12-11 08:40:19.455437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.853 [2024-12-11 08:40:19.487878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.853  [2024-12-11T08:40:19.886Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:12.112 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.112 08:40:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.681 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:12.681 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.681 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.681 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.681 [2024-12-11 08:40:20.419407] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:12.681 [2024-12-11 08:40:20.419496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60771 ] 00:06:12.681 { 00:06:12.681 "subsystems": [ 00:06:12.681 { 00:06:12.681 "subsystem": "bdev", 00:06:12.681 "config": [ 00:06:12.681 { 00:06:12.681 "params": { 00:06:12.681 "trtype": "pcie", 00:06:12.681 "traddr": "0000:00:10.0", 00:06:12.681 "name": "Nvme0" 00:06:12.681 }, 00:06:12.681 "method": "bdev_nvme_attach_controller" 00:06:12.681 }, 00:06:12.681 { 00:06:12.681 "method": "bdev_wait_for_examine" 00:06:12.681 } 00:06:12.681 ] 00:06:12.681 } 00:06:12.681 ] 00:06:12.681 } 00:06:12.940 [2024-12-11 08:40:20.567153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.940 [2024-12-11 08:40:20.601694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.940 [2024-12-11 08:40:20.636160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.199  [2024-12-11T08:40:20.973Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:13.199 00:06:13.199 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:13.199 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.199 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.199 08:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.199 [2024-12-11 08:40:20.918350] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:13.199 [2024-12-11 08:40:20.918454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:06:13.199 { 00:06:13.199 "subsystems": [ 00:06:13.199 { 00:06:13.199 "subsystem": "bdev", 00:06:13.199 "config": [ 00:06:13.199 { 00:06:13.199 "params": { 00:06:13.199 "trtype": "pcie", 00:06:13.199 "traddr": "0000:00:10.0", 00:06:13.199 "name": "Nvme0" 00:06:13.199 }, 00:06:13.199 "method": "bdev_nvme_attach_controller" 00:06:13.199 }, 00:06:13.199 { 00:06:13.199 "method": "bdev_wait_for_examine" 00:06:13.199 } 00:06:13.199 ] 00:06:13.199 } 00:06:13.199 ] 00:06:13.199 } 00:06:13.459 [2024-12-11 08:40:21.067346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.459 [2024-12-11 08:40:21.107127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.459 [2024-12-11 08:40:21.143273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.718  [2024-12-11T08:40:21.492Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:13.718 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.718 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.718 { 00:06:13.718 "subsystems": [ 00:06:13.718 { 00:06:13.718 "subsystem": "bdev", 00:06:13.718 "config": [ 00:06:13.718 { 00:06:13.718 "params": { 00:06:13.718 "trtype": "pcie", 00:06:13.718 "traddr": "0000:00:10.0", 00:06:13.718 "name": "Nvme0" 00:06:13.718 }, 00:06:13.718 "method": "bdev_nvme_attach_controller" 00:06:13.718 }, 00:06:13.718 { 00:06:13.718 "method": "bdev_wait_for_examine" 00:06:13.718 } 00:06:13.718 ] 00:06:13.718 } 00:06:13.718 ] 00:06:13.718 } 00:06:13.718 [2024-12-11 08:40:21.436293] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:13.718 [2024-12-11 08:40:21.436384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:06:13.977 [2024-12-11 08:40:21.582388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.977 [2024-12-11 08:40:21.618269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.977 [2024-12-11 08:40:21.651688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.977  [2024-12-11T08:40:22.010Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.236 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.236 08:40:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.803 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:14.803 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.803 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.803 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.803 [2024-12-11 08:40:22.472480] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:14.803 [2024-12-11 08:40:22.472572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:06:14.803 { 00:06:14.803 "subsystems": [ 00:06:14.803 { 00:06:14.803 "subsystem": "bdev", 00:06:14.803 "config": [ 00:06:14.803 { 00:06:14.803 "params": { 00:06:14.803 "trtype": "pcie", 00:06:14.803 "traddr": "0000:00:10.0", 00:06:14.803 "name": "Nvme0" 00:06:14.803 }, 00:06:14.803 "method": "bdev_nvme_attach_controller" 00:06:14.803 }, 00:06:14.803 { 00:06:14.803 "method": "bdev_wait_for_examine" 00:06:14.803 } 00:06:14.803 ] 00:06:14.803 } 00:06:14.803 ] 00:06:14.803 } 00:06:15.062 [2024-12-11 08:40:22.616469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.063 [2024-12-11 08:40:22.645539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.063 [2024-12-11 08:40:22.674881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.063  [2024-12-11T08:40:23.096Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:15.322 00:06:15.322 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.322 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:15.322 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.322 08:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.322 [2024-12-11 08:40:22.938040] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:15.322 [2024-12-11 08:40:22.938148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:06:15.322 { 00:06:15.322 "subsystems": [ 00:06:15.322 { 00:06:15.322 "subsystem": "bdev", 00:06:15.322 "config": [ 00:06:15.322 { 00:06:15.322 "params": { 00:06:15.322 "trtype": "pcie", 00:06:15.322 "traddr": "0000:00:10.0", 00:06:15.322 "name": "Nvme0" 00:06:15.322 }, 00:06:15.322 "method": "bdev_nvme_attach_controller" 00:06:15.322 }, 00:06:15.322 { 00:06:15.322 "method": "bdev_wait_for_examine" 00:06:15.322 } 00:06:15.322 ] 00:06:15.322 } 00:06:15.322 ] 00:06:15.322 } 00:06:15.322 [2024-12-11 08:40:23.084267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.581 [2024-12-11 08:40:23.112161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.581 [2024-12-11 08:40:23.138728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.581  [2024-12-11T08:40:23.355Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:15.581 00:06:15.581 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.840 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.840 [2024-12-11 08:40:23.412495] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:15.840 [2024-12-11 08:40:23.412590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:06:15.840 { 00:06:15.840 "subsystems": [ 00:06:15.840 { 00:06:15.840 "subsystem": "bdev", 00:06:15.840 "config": [ 00:06:15.840 { 00:06:15.840 "params": { 00:06:15.840 "trtype": "pcie", 00:06:15.840 "traddr": "0000:00:10.0", 00:06:15.840 "name": "Nvme0" 00:06:15.840 }, 00:06:15.840 "method": "bdev_nvme_attach_controller" 00:06:15.840 }, 00:06:15.840 { 00:06:15.840 "method": "bdev_wait_for_examine" 00:06:15.840 } 00:06:15.840 ] 00:06:15.840 } 00:06:15.840 ] 00:06:15.840 } 00:06:15.840 [2024-12-11 08:40:23.556411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.840 [2024-12-11 08:40:23.584953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.098 [2024-12-11 08:40:23.614816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.098  [2024-12-11T08:40:23.872Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.098 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:16.098 08:40:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.669 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:16.669 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:16.669 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.669 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.947 [2024-12-11 08:40:24.444689] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:16.947 [2024-12-11 08:40:24.444793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60867 ] 00:06:16.947 { 00:06:16.947 "subsystems": [ 00:06:16.947 { 00:06:16.947 "subsystem": "bdev", 00:06:16.947 "config": [ 00:06:16.947 { 00:06:16.947 "params": { 00:06:16.947 "trtype": "pcie", 00:06:16.947 "traddr": "0000:00:10.0", 00:06:16.947 "name": "Nvme0" 00:06:16.947 }, 00:06:16.947 "method": "bdev_nvme_attach_controller" 00:06:16.947 }, 00:06:16.947 { 00:06:16.947 "method": "bdev_wait_for_examine" 00:06:16.947 } 00:06:16.947 ] 00:06:16.947 } 00:06:16.947 ] 00:06:16.947 } 00:06:16.947 [2024-12-11 08:40:24.590570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.947 [2024-12-11 08:40:24.622826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.947 [2024-12-11 08:40:24.653803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.238  [2024-12-11T08:40:25.012Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:17.238 00:06:17.238 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:17.238 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.238 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.238 08:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.238 { 00:06:17.238 "subsystems": [ 00:06:17.238 { 00:06:17.238 "subsystem": "bdev", 00:06:17.238 "config": [ 00:06:17.238 { 00:06:17.238 "params": { 00:06:17.238 "trtype": "pcie", 00:06:17.238 "traddr": "0000:00:10.0", 00:06:17.238 "name": "Nvme0" 00:06:17.238 }, 00:06:17.238 "method": "bdev_nvme_attach_controller" 00:06:17.238 }, 00:06:17.238 { 00:06:17.238 "method": "bdev_wait_for_examine" 00:06:17.238 } 00:06:17.238 ] 00:06:17.238 } 00:06:17.238 ] 00:06:17.238 } 00:06:17.238 [2024-12-11 08:40:24.921428] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:17.238 [2024-12-11 08:40:24.921520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60875 ] 00:06:17.506 [2024-12-11 08:40:25.062115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.506 [2024-12-11 08:40:25.089764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.506 [2024-12-11 08:40:25.117328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.506  [2024-12-11T08:40:25.540Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:17.766 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.766 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.766 [2024-12-11 08:40:25.393611] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:17.766 [2024-12-11 08:40:25.393707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:06:17.766 { 00:06:17.766 "subsystems": [ 00:06:17.766 { 00:06:17.766 "subsystem": "bdev", 00:06:17.766 "config": [ 00:06:17.766 { 00:06:17.766 "params": { 00:06:17.766 "trtype": "pcie", 00:06:17.766 "traddr": "0000:00:10.0", 00:06:17.766 "name": "Nvme0" 00:06:17.766 }, 00:06:17.766 "method": "bdev_nvme_attach_controller" 00:06:17.766 }, 00:06:17.766 { 00:06:17.766 "method": "bdev_wait_for_examine" 00:06:17.766 } 00:06:17.766 ] 00:06:17.766 } 00:06:17.766 ] 00:06:17.766 } 00:06:17.766 [2024-12-11 08:40:25.531645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.025 [2024-12-11 08:40:25.578878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.025 [2024-12-11 08:40:25.608830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.025  [2024-12-11T08:40:26.057Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:18.283 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:18.283 08:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.542 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:18.542 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:18.542 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.542 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 [2024-12-11 08:40:26.359242] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:18.800 [2024-12-11 08:40:26.359853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60915 ] 00:06:18.800 { 00:06:18.800 "subsystems": [ 00:06:18.800 { 00:06:18.800 "subsystem": "bdev", 00:06:18.800 "config": [ 00:06:18.800 { 00:06:18.800 "params": { 00:06:18.800 "trtype": "pcie", 00:06:18.800 "traddr": "0000:00:10.0", 00:06:18.800 "name": "Nvme0" 00:06:18.800 }, 00:06:18.800 "method": "bdev_nvme_attach_controller" 00:06:18.800 }, 00:06:18.800 { 00:06:18.800 "method": "bdev_wait_for_examine" 00:06:18.800 } 00:06:18.800 ] 00:06:18.800 } 00:06:18.800 ] 00:06:18.800 } 00:06:18.800 [2024-12-11 08:40:26.504639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.800 [2024-12-11 08:40:26.533968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.800 [2024-12-11 08:40:26.561358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.058  [2024-12-11T08:40:26.832Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:19.058 00:06:19.058 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:19.058 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:19.058 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.058 08:40:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.058 [2024-12-11 08:40:26.826066] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:19.058 [2024-12-11 08:40:26.826176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60923 ] 00:06:19.058 { 00:06:19.058 "subsystems": [ 00:06:19.058 { 00:06:19.058 "subsystem": "bdev", 00:06:19.058 "config": [ 00:06:19.058 { 00:06:19.058 "params": { 00:06:19.058 "trtype": "pcie", 00:06:19.058 "traddr": "0000:00:10.0", 00:06:19.058 "name": "Nvme0" 00:06:19.058 }, 00:06:19.059 "method": "bdev_nvme_attach_controller" 00:06:19.059 }, 00:06:19.059 { 00:06:19.059 "method": "bdev_wait_for_examine" 00:06:19.059 } 00:06:19.059 ] 00:06:19.059 } 00:06:19.059 ] 00:06:19.059 } 00:06:19.316 [2024-12-11 08:40:26.970640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.316 [2024-12-11 08:40:27.003412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.316 [2024-12-11 08:40:27.031082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.575  [2024-12-11T08:40:27.349Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:19.575 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.575 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.575 [2024-12-11 08:40:27.298569] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:19.575 [2024-12-11 08:40:27.299104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:06:19.575 { 00:06:19.575 "subsystems": [ 00:06:19.575 { 00:06:19.575 "subsystem": "bdev", 00:06:19.575 "config": [ 00:06:19.575 { 00:06:19.575 "params": { 00:06:19.575 "trtype": "pcie", 00:06:19.575 "traddr": "0000:00:10.0", 00:06:19.575 "name": "Nvme0" 00:06:19.575 }, 00:06:19.575 "method": "bdev_nvme_attach_controller" 00:06:19.575 }, 00:06:19.575 { 00:06:19.575 "method": "bdev_wait_for_examine" 00:06:19.575 } 00:06:19.575 ] 00:06:19.575 } 00:06:19.575 ] 00:06:19.575 } 00:06:19.834 [2024-12-11 08:40:27.445527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.834 [2024-12-11 08:40:27.476954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.834 [2024-12-11 08:40:27.508522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.834  [2024-12-11T08:40:27.867Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:20.093 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:20.093 08:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.662 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:20.662 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:20.662 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.662 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.662 [2024-12-11 08:40:28.276492] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:20.662 [2024-12-11 08:40:28.276628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:06:20.662 { 00:06:20.662 "subsystems": [ 00:06:20.662 { 00:06:20.662 "subsystem": "bdev", 00:06:20.662 "config": [ 00:06:20.662 { 00:06:20.662 "params": { 00:06:20.662 "trtype": "pcie", 00:06:20.662 "traddr": "0000:00:10.0", 00:06:20.662 "name": "Nvme0" 00:06:20.662 }, 00:06:20.662 "method": "bdev_nvme_attach_controller" 00:06:20.662 }, 00:06:20.662 { 00:06:20.662 "method": "bdev_wait_for_examine" 00:06:20.662 } 00:06:20.662 ] 00:06:20.662 } 00:06:20.662 ] 00:06:20.662 } 00:06:20.662 [2024-12-11 08:40:28.422582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.921 [2024-12-11 08:40:28.456529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.921 [2024-12-11 08:40:28.489541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.921  [2024-12-11T08:40:28.954Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:21.180 00:06:21.180 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:21.180 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:21.180 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.180 08:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.180 { 00:06:21.180 "subsystems": [ 00:06:21.180 { 00:06:21.180 "subsystem": "bdev", 00:06:21.180 "config": [ 00:06:21.180 { 00:06:21.180 "params": { 00:06:21.180 "trtype": "pcie", 00:06:21.180 "traddr": "0000:00:10.0", 00:06:21.180 "name": "Nvme0" 00:06:21.180 }, 00:06:21.180 "method": "bdev_nvme_attach_controller" 00:06:21.180 }, 00:06:21.180 { 00:06:21.180 "method": "bdev_wait_for_examine" 00:06:21.180 } 00:06:21.180 ] 00:06:21.180 } 00:06:21.180 ] 00:06:21.180 } 00:06:21.180 [2024-12-11 08:40:28.771894] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:21.180 [2024-12-11 08:40:28.771992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60971 ] 00:06:21.180 [2024-12-11 08:40:28.916753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.180 [2024-12-11 08:40:28.946622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.439 [2024-12-11 08:40:28.977521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.439  [2024-12-11T08:40:29.213Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:21.439 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.439 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.698 [2024-12-11 08:40:29.256833] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:21.698 [2024-12-11 08:40:29.256931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:06:21.698 { 00:06:21.698 "subsystems": [ 00:06:21.698 { 00:06:21.698 "subsystem": "bdev", 00:06:21.698 "config": [ 00:06:21.698 { 00:06:21.698 "params": { 00:06:21.698 "trtype": "pcie", 00:06:21.698 "traddr": "0000:00:10.0", 00:06:21.698 "name": "Nvme0" 00:06:21.698 }, 00:06:21.698 "method": "bdev_nvme_attach_controller" 00:06:21.698 }, 00:06:21.698 { 00:06:21.698 "method": "bdev_wait_for_examine" 00:06:21.698 } 00:06:21.698 ] 00:06:21.698 } 00:06:21.698 ] 00:06:21.698 } 00:06:21.698 [2024-12-11 08:40:29.401473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.698 [2024-12-11 08:40:29.432819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.698 [2024-12-11 08:40:29.463517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.957  [2024-12-11T08:40:29.731Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.957 00:06:21.957 00:06:21.957 real 0m12.084s 00:06:21.957 user 0m9.044s 00:06:21.957 sys 0m3.705s 00:06:21.957 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.957 ************************************ 00:06:21.957 END TEST dd_rw 00:06:21.957 ************************************ 00:06:21.957 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.216 ************************************ 00:06:22.216 START TEST dd_rw_offset 00:06:22.216 ************************************ 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=3d1np1ztitevzak44xo48dz9a3mlf30g6l3lrl4dc9abco1vw7xlz758ovy6z01c78rghvmsg0hwfgx1t6q2a25bzn4l687cjs82gglkx3726uutissvmyoub0n2utulzdbo7zy83vjlgoqsczts8wuz3e2u9zedismgmrvl17ybbtkk6488r503ruytfsilxbqnudfcujdwmfom6lzs1g3ml5jm7yzwkpwfy70impr6ogoad704789kqgvs2bfrlzp7vzbsonxkp4p3r1c0sg551zhn5v3tb4r2lo470bd5zytmua7v55orikyjjlcjx8ryijghyebsnsu5ktmvha76beu0rwuab725rdt1w59asfrwfddqbnp357bsx0xbegf5trbwdodkl6phnabstsu2pw60n19m4ipc0d3hcm4k34hhwpw3wvi6pp83fk9no6ufq7m4p2ou0zj7d5tttysuk3ryiluvix12q6wcgiy40z6lpydmkvl5rlflxqd43tlk5e8u97g7atlsvq7bnezkm33k3h58fb6f13gkw165bctbn5favigrkz8ww63st29948k9thmwyix9ndy5ag1dlkyeuep1uvfi0oruomhnpodjrpton01eeuar1pc4vzhxekyow9d8xeduwt0pr1u6bosyj13vgm4ai8dfgbnm43xmjhtcyckuyt7odmi2i6d176x8sjmpyj4jp53qr5fu1s2bi3vxj69o5iuycp94z12n6tl9xmkbbt9gxmachzibs6jubf5vaxwglvh2r86l3wa65a4jhwgepsje6e01406q8mhf1y6gtarnbtcv93to2nmb6aeszry3u91odeiaud1cfhti5lcmj9rf2j3r3ryyf8vznjeb9rlm87lc5guh6myk8uiaokcequa7lzzeibu8oegdyrzcz21344pwv22uz56oasptu6mykqirouan31uw66whcc3j8uq9snfg6p0da3ai4dpkdafv2eqbldw4yxtrma57xfrjy16od5ha9sikptmzrw313rnt6xwf4o8130sevqno82vwoguswmvtgg8kuutvuwdy3sdl2bpzdm8i3rjskdgp02ub33sbueb32dc7n1vkiw4g2ngiazdk0sjivfudmtmviws9nyplfvrp2e1at78hm5c0evzk5q9spsr21i2clihqv4vitmfbm59p5073ueclw0nr1dmvnallq1nscq58zul1az6pldjwk54hl7o77vmdslbc34ksb1ke9u2nki1v8qiu4035r0e3o8oropefa9oxhpe5efnbm6w41rwfks9v85xpniiuw0e9hq87dnqlsui4y1bqrtvkjn3r783nzo6rzs6al5gjs3qbx50rjle56m1m503fp95oe2xtq6jm8ip7lx21f4nd65f51obcezwijag6kzgxn3v21t3hqennbpbdviqsvv5aw0drpbbsqylap5gnu4n5c93mbh98tnkenvqw4kwo88rnta2nukr7ehfauum35tcboquslto4hg9yv77ivhq8j788lv8iby8e80c7kp0ldujyey3id2967pcl5nj85r613kndqpx4k4slvmqrzf4cuq6apq7q8x8oiuuvlh6ymix6wbz7oulrxmlaj4kdzmnbb45139chqcdewzusrs57bq0h8a4hlm09fpj9ttxdc9a1dbzfkukpjl45d7xi9kwfs64541n9y2l6weli9pjacmwz9fkoxzb4b2ohrkpjc3sqeax2ja68l2ng5fz503w60b2a5txv3xeqwk2q7zknf51bnq9vh43hfhu2ehcodc3tbnltiktkwthnnhoxck2mt8vbgk74q4jcq4x6rz70zyh9iwnwvozqh7aq4em1szd1pcztpcdbhpxu1ewu28qz64tp1avahcsm1n430cmgu2fuqe92l221d5kzs4a42uufj4yrcaxg0c1g9r2bltzb74b3dbl9n6w7cnv1r3pyvwghngl6454ribct9qoyh1aev6uwmv7380kyh1ofwqby21e2tvrkajvqokr365gy4pbm0q4fj2h1gc23wcjketwlivhnq1oct4iiuogflmtos53fgm9tw7kjhbzawk10fxhk6fqbcr9tsbohf5q0099m9nbwe135ljq9lftvm3jj5yw36dbw7rxdxoxk2mmbccld4qy4iaaqq5s120ad771t57sgwm7ml71dgk6q3iad2jxzg2zzktydv5c3ohxpg0mzi1891py4xhg7scvtgxqz6sgb0qfeyfztku27a43hp58hs679bnm8ttcf3smuxqlvjbxqfgcqg4c725jynyf4ek1rx3j328hrlbz8qdyq5wtwrdhh19jdw4efyaaxuc2vogiff27xbj6h11d4d1pt2x3guv7t5vxdhlp9hdntpfm4q1z8wy34fnfbsq3bnwljwkmo1mbzboknppjxl83ge7op592omiiqv11d4zu7gqu2h2yf8ouz9gyejmy5roqy5avmlywoulp8r382tm1ul5tvpiu4yg2iw5q5g8h8otmscl9zuek8fuznubad9go7y4soqorb5re8c8uyqztamowe4spywsbx1ufgs8g9xnzblxagqcaygvkf4of7sdz82zl1zlzh3appdlfcnihzcx0wsh8e1q6ecax50zi6ygvl72pt3ixjy0ywjku5slvkxrk5xp4a2zc54o2yw23tti1zot7rlqjeuz65xmj452aphlyfhio0q4q84qf8a4cxftmfd51tpq4ug5wdfw9af8q61jhdgmfsa55a7b5elo7rxz3tyqq7qns206moncuik2hm2nwllrmhd1gppehuvrh103t815cgdv0uaau6iepne2069ng7c72a83t9z1z5o77akrxbuknxu9mukq2phfxivutlt10kev00s7b4los04y010dxlx34apihyhp9310wc2dr87vwx7cuf9m5yvdua64db1ys1k0uz9zls80sj8ub3knkrx3j0wh56gv3xogwfojiah3ymbb7i9snejxv0tfnsi0c0n7r13fsq5hx7kli6siq0k0xq2el3sbnhjq6yajfsf5rmjipq44pue3fhqledv70atev9ofpafuz4jcb2fi6h2jmlsajuel0md6x1klphjn9pln1odr7yghpsynfqys6hbgm8op7w9uo9n3otfeoeahp547ubf9tjh30m5dyrziyejclj9wol632cjszfyrvvq6wkxxc45dpjsbg3gtz6wityz3blizpyhqouob53umyei8f4i8ocjzuvaxo9qar685ymn3six4nw0p4kf1r1pd0llqpk8583rjwvoqa0kopeabg04crkl9iq1yp3wvgenkyq6ylbhs6tg7dn7ymgmk9f6l2ex2c1ula5p1btjvnhejmtqnucz8fnjckpxxp7alp84n1htipc0mc9tjjrjm9vssuoc55mc5fapq115dun1h0lfnfp9t0d2n89zfg7efu496tx496qada4iatbcxeoxelcfg3rc0ddc9nscetpvmnc5f
f11vob6zw6ytcur9ga5phks2njrgtrujcf0raigdqenzq7fonj8u56gvn4qizma3ip0szjd0zuvf083grnybgxj5eay18movhlswz46l0iz88ki9iee0rouhdete9xd731urj01oez94wgkx4fakancm03bgv9f83fvx16opu9lm01dgjfiivdyke5flfg9m46a9aq347esdvklvtpqzucj4bvig139xhoqgkvosedzukzsb7lunx1zaarj52heevsc38b3ly9wexbtoz0fbwk7wejf42wjrlz68q2lxvnfi61uyie4xvzln61aahkkvksfjd0w8fkvjti321xh4fblns74wv2ivq5qu1z4o8df3n024hqh75vr719hsjdlab7q01gza3p5lrybyy5wyri1y1nqtvzgdgsxg1umtx7dttksua739y4121hwxwy8hikqgtcelt6jfy1ij8od4xtd25xev0jz0fnbcgceguq33kp3m08rmd48o1wy6xtzcrd399o55sgtirg8dqd8f6h5sx44n1pihcl 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:22.216 08:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:22.216 [2024-12-11 08:40:29.854723] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:22.216 [2024-12-11 08:40:29.854825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61017 ] 00:06:22.216 { 00:06:22.216 "subsystems": [ 00:06:22.216 { 00:06:22.216 "subsystem": "bdev", 00:06:22.216 "config": [ 00:06:22.216 { 00:06:22.216 "params": { 00:06:22.216 "trtype": "pcie", 00:06:22.216 "traddr": "0000:00:10.0", 00:06:22.216 "name": "Nvme0" 00:06:22.216 }, 00:06:22.216 "method": "bdev_nvme_attach_controller" 00:06:22.216 }, 00:06:22.216 { 00:06:22.216 "method": "bdev_wait_for_examine" 00:06:22.216 } 00:06:22.216 ] 00:06:22.216 } 00:06:22.216 ] 00:06:22.216 } 00:06:22.475 [2024-12-11 08:40:30.002659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.475 [2024-12-11 08:40:30.038464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.476 [2024-12-11 08:40:30.070055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.476  [2024-12-11T08:40:30.509Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:22.735 00:06:22.735 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:22.735 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:22.735 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:22.735 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:22.735 [2024-12-11 08:40:30.336823] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:22.735 [2024-12-11 08:40:30.336908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61030 ] 00:06:22.735 { 00:06:22.735 "subsystems": [ 00:06:22.735 { 00:06:22.735 "subsystem": "bdev", 00:06:22.735 "config": [ 00:06:22.735 { 00:06:22.735 "params": { 00:06:22.735 "trtype": "pcie", 00:06:22.735 "traddr": "0000:00:10.0", 00:06:22.735 "name": "Nvme0" 00:06:22.735 }, 00:06:22.735 "method": "bdev_nvme_attach_controller" 00:06:22.735 }, 00:06:22.735 { 00:06:22.735 "method": "bdev_wait_for_examine" 00:06:22.735 } 00:06:22.735 ] 00:06:22.735 } 00:06:22.735 ] 00:06:22.735 } 00:06:22.735 [2024-12-11 08:40:30.476348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.994 [2024-12-11 08:40:30.509239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.994 [2024-12-11 08:40:30.542731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.994  [2024-12-11T08:40:30.768Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:22.994 00:06:23.254 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 3d1np1ztitevzak44xo48dz9a3mlf30g6l3lrl4dc9abco1vw7xlz758ovy6z01c78rghvmsg0hwfgx1t6q2a25bzn4l687cjs82gglkx3726uutissvmyoub0n2utulzdbo7zy83vjlgoqsczts8wuz3e2u9zedismgmrvl17ybbtkk6488r503ruytfsilxbqnudfcujdwmfom6lzs1g3ml5jm7yzwkpwfy70impr6ogoad704789kqgvs2bfrlzp7vzbsonxkp4p3r1c0sg551zhn5v3tb4r2lo470bd5zytmua7v55orikyjjlcjx8ryijghyebsnsu5ktmvha76beu0rwuab725rdt1w59asfrwfddqbnp357bsx0xbegf5trbwdodkl6phnabstsu2pw60n19m4ipc0d3hcm4k34hhwpw3wvi6pp83fk9no6ufq7m4p2ou0zj7d5tttysuk3ryiluvix12q6wcgiy40z6lpydmkvl5rlflxqd43tlk5e8u97g7atlsvq7bnezkm33k3h58fb6f13gkw165bctbn5favigrkz8ww63st29948k9thmwyix9ndy5ag1dlkyeuep1uvfi0oruomhnpodjrpton01eeuar1pc4vzhxekyow9d8xeduwt0pr1u6bosyj13vgm4ai8dfgbnm43xmjhtcyckuyt7odmi2i6d176x8sjmpyj4jp53qr5fu1s2bi3vxj69o5iuycp94z12n6tl9xmkbbt9gxmachzibs6jubf5vaxwglvh2r86l3wa65a4jhwgepsje6e01406q8mhf1y6gtarnbtcv93to2nmb6aeszry3u91odeiaud1cfhti5lcmj9rf2j3r3ryyf8vznjeb9rlm87lc5guh6myk8uiaokcequa7lzzeibu8oegdyrzcz21344pwv22uz56oasptu6mykqirouan31uw66whcc3j8uq9snfg6p0da3ai4dpkdafv2eqbldw4yxtrma57xfrjy16od5ha9sikptmzrw313rnt6xwf4o8130sevqno82vwoguswmvtgg8kuutvuwdy3sdl2bpzdm8i3rjskdgp02ub33sbueb32dc7n1vkiw4g2ngiazdk0sjivfudmtmviws9nyplfvrp2e1at78hm5c0evzk5q9spsr21i2clihqv4vitmfbm59p5073ueclw0nr1dmvnallq1nscq58zul1az6pldjwk54hl7o77vmdslbc34ksb1ke9u2nki1v8qiu4035r0e3o8oropefa9oxhpe5efnbm6w41rwfks9v85xpniiuw0e9hq87dnqlsui4y1bqrtvkjn3r783nzo6rzs6al5gjs3qbx50rjle56m1m503fp95oe2xtq6jm8ip7lx21f4nd65f51obcezwijag6kzgxn3v21t3hqennbpbdviqsvv5aw0drpbbsqylap5gnu4n5c93mbh98tnkenvqw4kwo88rnta2nukr7ehfauum35tcboquslto4hg9yv77ivhq8j788lv8iby8e80c7kp0ldujyey3id2967pcl5nj85r613kndqpx4k4slvmqrzf4cuq6apq7q8x8oiuuvlh6ymix6wbz7oulrxmlaj4kdzmnbb45139chqcdewzusrs57bq0h8a4hlm09fpj9ttxdc9a1dbzfkukpjl45d7xi9kwfs64541n9y2l6weli9pjacmwz9fkoxzb4b2ohrkpjc3sqeax2ja68l2ng5fz503w60b2a5txv3xeqwk2q7zknf51bnq9vh43hfhu2ehcodc3tbnltiktkwthnnhoxck2mt8vbgk74q4jcq4x6rz70zyh9iwnwvozqh7aq4em1szd1pcztpcdbhpxu1ewu28qz64tp1avahcsm1n430cmgu2fuqe92l221d5kzs4a42uufj4yrcaxg0c1g9r2bltzb74b3dbl9n6w7cnv1r3pyvwghngl6454ribct9qoyh1aev6uwmv7380kyh1ofwqby21e2tvrkajvqokr365gy4pbm0q4fj2h1gc23wcjketwlivhnq1oct4iiuogflmtos53fgm9tw7kjhbzawk10fxhk6fqbc
r9tsbohf5q0099m9nbwe135ljq9lftvm3jj5yw36dbw7rxdxoxk2mmbccld4qy4iaaqq5s120ad771t57sgwm7ml71dgk6q3iad2jxzg2zzktydv5c3ohxpg0mzi1891py4xhg7scvtgxqz6sgb0qfeyfztku27a43hp58hs679bnm8ttcf3smuxqlvjbxqfgcqg4c725jynyf4ek1rx3j328hrlbz8qdyq5wtwrdhh19jdw4efyaaxuc2vogiff27xbj6h11d4d1pt2x3guv7t5vxdhlp9hdntpfm4q1z8wy34fnfbsq3bnwljwkmo1mbzboknppjxl83ge7op592omiiqv11d4zu7gqu2h2yf8ouz9gyejmy5roqy5avmlywoulp8r382tm1ul5tvpiu4yg2iw5q5g8h8otmscl9zuek8fuznubad9go7y4soqorb5re8c8uyqztamowe4spywsbx1ufgs8g9xnzblxagqcaygvkf4of7sdz82zl1zlzh3appdlfcnihzcx0wsh8e1q6ecax50zi6ygvl72pt3ixjy0ywjku5slvkxrk5xp4a2zc54o2yw23tti1zot7rlqjeuz65xmj452aphlyfhio0q4q84qf8a4cxftmfd51tpq4ug5wdfw9af8q61jhdgmfsa55a7b5elo7rxz3tyqq7qns206moncuik2hm2nwllrmhd1gppehuvrh103t815cgdv0uaau6iepne2069ng7c72a83t9z1z5o77akrxbuknxu9mukq2phfxivutlt10kev00s7b4los04y010dxlx34apihyhp9310wc2dr87vwx7cuf9m5yvdua64db1ys1k0uz9zls80sj8ub3knkrx3j0wh56gv3xogwfojiah3ymbb7i9snejxv0tfnsi0c0n7r13fsq5hx7kli6siq0k0xq2el3sbnhjq6yajfsf5rmjipq44pue3fhqledv70atev9ofpafuz4jcb2fi6h2jmlsajuel0md6x1klphjn9pln1odr7yghpsynfqys6hbgm8op7w9uo9n3otfeoeahp547ubf9tjh30m5dyrziyejclj9wol632cjszfyrvvq6wkxxc45dpjsbg3gtz6wityz3blizpyhqouob53umyei8f4i8ocjzuvaxo9qar685ymn3six4nw0p4kf1r1pd0llqpk8583rjwvoqa0kopeabg04crkl9iq1yp3wvgenkyq6ylbhs6tg7dn7ymgmk9f6l2ex2c1ula5p1btjvnhejmtqnucz8fnjckpxxp7alp84n1htipc0mc9tjjrjm9vssuoc55mc5fapq115dun1h0lfnfp9t0d2n89zfg7efu496tx496qada4iatbcxeoxelcfg3rc0ddc9nscetpvmnc5ff11vob6zw6ytcur9ga5phks2njrgtrujcf0raigdqenzq7fonj8u56gvn4qizma3ip0szjd0zuvf083grnybgxj5eay18movhlswz46l0iz88ki9iee0rouhdete9xd731urj01oez94wgkx4fakancm03bgv9f83fvx16opu9lm01dgjfiivdyke5flfg9m46a9aq347esdvklvtpqzucj4bvig139xhoqgkvosedzukzsb7lunx1zaarj52heevsc38b3ly9wexbtoz0fbwk7wejf42wjrlz68q2lxvnfi61uyie4xvzln61aahkkvksfjd0w8fkvjti321xh4fblns74wv2ivq5qu1z4o8df3n024hqh75vr719hsjdlab7q01gza3p5lrybyy5wyri1y1nqtvzgdgsxg1umtx7dttksua739y4121hwxwy8hikqgtcelt6jfy1ij8od4xtd25xev0jz0fnbcgceguq33kp3m08rmd48o1wy6xtzcrd399o55sgtirg8dqd8f6h5sx44n1pihcl == 
\3\d\1\n\p\1\z\t\i\t\e\v\z\a\k\4\4\x\o\4\8\d\z\9\a\3\m\l\f\3\0\g\6\l\3\l\r\l\4\d\c\9\a\b\c\o\1\v\w\7\x\l\z\7\5\8\o\v\y\6\z\0\1\c\7\8\r\g\h\v\m\s\g\0\h\w\f\g\x\1\t\6\q\2\a\2\5\b\z\n\4\l\6\8\7\c\j\s\8\2\g\g\l\k\x\3\7\2\6\u\u\t\i\s\s\v\m\y\o\u\b\0\n\2\u\t\u\l\z\d\b\o\7\z\y\8\3\v\j\l\g\o\q\s\c\z\t\s\8\w\u\z\3\e\2\u\9\z\e\d\i\s\m\g\m\r\v\l\1\7\y\b\b\t\k\k\6\4\8\8\r\5\0\3\r\u\y\t\f\s\i\l\x\b\q\n\u\d\f\c\u\j\d\w\m\f\o\m\6\l\z\s\1\g\3\m\l\5\j\m\7\y\z\w\k\p\w\f\y\7\0\i\m\p\r\6\o\g\o\a\d\7\0\4\7\8\9\k\q\g\v\s\2\b\f\r\l\z\p\7\v\z\b\s\o\n\x\k\p\4\p\3\r\1\c\0\s\g\5\5\1\z\h\n\5\v\3\t\b\4\r\2\l\o\4\7\0\b\d\5\z\y\t\m\u\a\7\v\5\5\o\r\i\k\y\j\j\l\c\j\x\8\r\y\i\j\g\h\y\e\b\s\n\s\u\5\k\t\m\v\h\a\7\6\b\e\u\0\r\w\u\a\b\7\2\5\r\d\t\1\w\5\9\a\s\f\r\w\f\d\d\q\b\n\p\3\5\7\b\s\x\0\x\b\e\g\f\5\t\r\b\w\d\o\d\k\l\6\p\h\n\a\b\s\t\s\u\2\p\w\6\0\n\1\9\m\4\i\p\c\0\d\3\h\c\m\4\k\3\4\h\h\w\p\w\3\w\v\i\6\p\p\8\3\f\k\9\n\o\6\u\f\q\7\m\4\p\2\o\u\0\z\j\7\d\5\t\t\t\y\s\u\k\3\r\y\i\l\u\v\i\x\1\2\q\6\w\c\g\i\y\4\0\z\6\l\p\y\d\m\k\v\l\5\r\l\f\l\x\q\d\4\3\t\l\k\5\e\8\u\9\7\g\7\a\t\l\s\v\q\7\b\n\e\z\k\m\3\3\k\3\h\5\8\f\b\6\f\1\3\g\k\w\1\6\5\b\c\t\b\n\5\f\a\v\i\g\r\k\z\8\w\w\6\3\s\t\2\9\9\4\8\k\9\t\h\m\w\y\i\x\9\n\d\y\5\a\g\1\d\l\k\y\e\u\e\p\1\u\v\f\i\0\o\r\u\o\m\h\n\p\o\d\j\r\p\t\o\n\0\1\e\e\u\a\r\1\p\c\4\v\z\h\x\e\k\y\o\w\9\d\8\x\e\d\u\w\t\0\p\r\1\u\6\b\o\s\y\j\1\3\v\g\m\4\a\i\8\d\f\g\b\n\m\4\3\x\m\j\h\t\c\y\c\k\u\y\t\7\o\d\m\i\2\i\6\d\1\7\6\x\8\s\j\m\p\y\j\4\j\p\5\3\q\r\5\f\u\1\s\2\b\i\3\v\x\j\6\9\o\5\i\u\y\c\p\9\4\z\1\2\n\6\t\l\9\x\m\k\b\b\t\9\g\x\m\a\c\h\z\i\b\s\6\j\u\b\f\5\v\a\x\w\g\l\v\h\2\r\8\6\l\3\w\a\6\5\a\4\j\h\w\g\e\p\s\j\e\6\e\0\1\4\0\6\q\8\m\h\f\1\y\6\g\t\a\r\n\b\t\c\v\9\3\t\o\2\n\m\b\6\a\e\s\z\r\y\3\u\9\1\o\d\e\i\a\u\d\1\c\f\h\t\i\5\l\c\m\j\9\r\f\2\j\3\r\3\r\y\y\f\8\v\z\n\j\e\b\9\r\l\m\8\7\l\c\5\g\u\h\6\m\y\k\8\u\i\a\o\k\c\e\q\u\a\7\l\z\z\e\i\b\u\8\o\e\g\d\y\r\z\c\z\2\1\3\4\4\p\w\v\2\2\u\z\5\6\o\a\s\p\t\u\6\m\y\k\q\i\r\o\u\a\n\3\1\u\w\6\6\w\h\c\c\3\j\8\u\q\9\s\n\f\g\6\p\0\d\a\3\a\i\4\d\p\k\d\a\f\v\2\e\q\b\l\d\w\4\y\x\t\r\m\a\5\7\x\f\r\j\y\1\6\o\d\5\h\a\9\s\i\k\p\t\m\z\r\w\3\1\3\r\n\t\6\x\w\f\4\o\8\1\3\0\s\e\v\q\n\o\8\2\v\w\o\g\u\s\w\m\v\t\g\g\8\k\u\u\t\v\u\w\d\y\3\s\d\l\2\b\p\z\d\m\8\i\3\r\j\s\k\d\g\p\0\2\u\b\3\3\s\b\u\e\b\3\2\d\c\7\n\1\v\k\i\w\4\g\2\n\g\i\a\z\d\k\0\s\j\i\v\f\u\d\m\t\m\v\i\w\s\9\n\y\p\l\f\v\r\p\2\e\1\a\t\7\8\h\m\5\c\0\e\v\z\k\5\q\9\s\p\s\r\2\1\i\2\c\l\i\h\q\v\4\v\i\t\m\f\b\m\5\9\p\5\0\7\3\u\e\c\l\w\0\n\r\1\d\m\v\n\a\l\l\q\1\n\s\c\q\5\8\z\u\l\1\a\z\6\p\l\d\j\w\k\5\4\h\l\7\o\7\7\v\m\d\s\l\b\c\3\4\k\s\b\1\k\e\9\u\2\n\k\i\1\v\8\q\i\u\4\0\3\5\r\0\e\3\o\8\o\r\o\p\e\f\a\9\o\x\h\p\e\5\e\f\n\b\m\6\w\4\1\r\w\f\k\s\9\v\8\5\x\p\n\i\i\u\w\0\e\9\h\q\8\7\d\n\q\l\s\u\i\4\y\1\b\q\r\t\v\k\j\n\3\r\7\8\3\n\z\o\6\r\z\s\6\a\l\5\g\j\s\3\q\b\x\5\0\r\j\l\e\5\6\m\1\m\5\0\3\f\p\9\5\o\e\2\x\t\q\6\j\m\8\i\p\7\l\x\2\1\f\4\n\d\6\5\f\5\1\o\b\c\e\z\w\i\j\a\g\6\k\z\g\x\n\3\v\2\1\t\3\h\q\e\n\n\b\p\b\d\v\i\q\s\v\v\5\a\w\0\d\r\p\b\b\s\q\y\l\a\p\5\g\n\u\4\n\5\c\9\3\m\b\h\9\8\t\n\k\e\n\v\q\w\4\k\w\o\8\8\r\n\t\a\2\n\u\k\r\7\e\h\f\a\u\u\m\3\5\t\c\b\o\q\u\s\l\t\o\4\h\g\9\y\v\7\7\i\v\h\q\8\j\7\8\8\l\v\8\i\b\y\8\e\8\0\c\7\k\p\0\l\d\u\j\y\e\y\3\i\d\2\9\6\7\p\c\l\5\n\j\8\5\r\6\1\3\k\n\d\q\p\x\4\k\4\s\l\v\m\q\r\z\f\4\c\u\q\6\a\p\q\7\q\8\x\8\o\i\u\u\v\l\h\6\y\m\i\x\6\w\b\z\7\o\u\l\r\x\m\l\a\j\4\k\d\z\m\n\b\b\4\5\1\3\9\c\h\q\c\d\e\w\z\u\s\r\s\5\7\b\q\0\h\8\a\4\h\l\m\0\9\f\p\j\9\t\t\x\d\c\9\a\1\d\b\z\f\k\u\k\p\j\l\4\5\d\7\x\i\9\k\w\f\s\6\4\5\4\1\n\9\y\2\l\6\w\e\l\i\9\p\j\a\c\m\w\z\9\f\k\o\x\z\b\4\b\2\o\h\r\k\p\j\c\3\s\q\e\a\x\2\j\a\6\8\l\2\n\g\5\f\z\5\0\
3\w\6\0\b\2\a\5\t\x\v\3\x\e\q\w\k\2\q\7\z\k\n\f\5\1\b\n\q\9\v\h\4\3\h\f\h\u\2\e\h\c\o\d\c\3\t\b\n\l\t\i\k\t\k\w\t\h\n\n\h\o\x\c\k\2\m\t\8\v\b\g\k\7\4\q\4\j\c\q\4\x\6\r\z\7\0\z\y\h\9\i\w\n\w\v\o\z\q\h\7\a\q\4\e\m\1\s\z\d\1\p\c\z\t\p\c\d\b\h\p\x\u\1\e\w\u\2\8\q\z\6\4\t\p\1\a\v\a\h\c\s\m\1\n\4\3\0\c\m\g\u\2\f\u\q\e\9\2\l\2\2\1\d\5\k\z\s\4\a\4\2\u\u\f\j\4\y\r\c\a\x\g\0\c\1\g\9\r\2\b\l\t\z\b\7\4\b\3\d\b\l\9\n\6\w\7\c\n\v\1\r\3\p\y\v\w\g\h\n\g\l\6\4\5\4\r\i\b\c\t\9\q\o\y\h\1\a\e\v\6\u\w\m\v\7\3\8\0\k\y\h\1\o\f\w\q\b\y\2\1\e\2\t\v\r\k\a\j\v\q\o\k\r\3\6\5\g\y\4\p\b\m\0\q\4\f\j\2\h\1\g\c\2\3\w\c\j\k\e\t\w\l\i\v\h\n\q\1\o\c\t\4\i\i\u\o\g\f\l\m\t\o\s\5\3\f\g\m\9\t\w\7\k\j\h\b\z\a\w\k\1\0\f\x\h\k\6\f\q\b\c\r\9\t\s\b\o\h\f\5\q\0\0\9\9\m\9\n\b\w\e\1\3\5\l\j\q\9\l\f\t\v\m\3\j\j\5\y\w\3\6\d\b\w\7\r\x\d\x\o\x\k\2\m\m\b\c\c\l\d\4\q\y\4\i\a\a\q\q\5\s\1\2\0\a\d\7\7\1\t\5\7\s\g\w\m\7\m\l\7\1\d\g\k\6\q\3\i\a\d\2\j\x\z\g\2\z\z\k\t\y\d\v\5\c\3\o\h\x\p\g\0\m\z\i\1\8\9\1\p\y\4\x\h\g\7\s\c\v\t\g\x\q\z\6\s\g\b\0\q\f\e\y\f\z\t\k\u\2\7\a\4\3\h\p\5\8\h\s\6\7\9\b\n\m\8\t\t\c\f\3\s\m\u\x\q\l\v\j\b\x\q\f\g\c\q\g\4\c\7\2\5\j\y\n\y\f\4\e\k\1\r\x\3\j\3\2\8\h\r\l\b\z\8\q\d\y\q\5\w\t\w\r\d\h\h\1\9\j\d\w\4\e\f\y\a\a\x\u\c\2\v\o\g\i\f\f\2\7\x\b\j\6\h\1\1\d\4\d\1\p\t\2\x\3\g\u\v\7\t\5\v\x\d\h\l\p\9\h\d\n\t\p\f\m\4\q\1\z\8\w\y\3\4\f\n\f\b\s\q\3\b\n\w\l\j\w\k\m\o\1\m\b\z\b\o\k\n\p\p\j\x\l\8\3\g\e\7\o\p\5\9\2\o\m\i\i\q\v\1\1\d\4\z\u\7\g\q\u\2\h\2\y\f\8\o\u\z\9\g\y\e\j\m\y\5\r\o\q\y\5\a\v\m\l\y\w\o\u\l\p\8\r\3\8\2\t\m\1\u\l\5\t\v\p\i\u\4\y\g\2\i\w\5\q\5\g\8\h\8\o\t\m\s\c\l\9\z\u\e\k\8\f\u\z\n\u\b\a\d\9\g\o\7\y\4\s\o\q\o\r\b\5\r\e\8\c\8\u\y\q\z\t\a\m\o\w\e\4\s\p\y\w\s\b\x\1\u\f\g\s\8\g\9\x\n\z\b\l\x\a\g\q\c\a\y\g\v\k\f\4\o\f\7\s\d\z\8\2\z\l\1\z\l\z\h\3\a\p\p\d\l\f\c\n\i\h\z\c\x\0\w\s\h\8\e\1\q\6\e\c\a\x\5\0\z\i\6\y\g\v\l\7\2\p\t\3\i\x\j\y\0\y\w\j\k\u\5\s\l\v\k\x\r\k\5\x\p\4\a\2\z\c\5\4\o\2\y\w\2\3\t\t\i\1\z\o\t\7\r\l\q\j\e\u\z\6\5\x\m\j\4\5\2\a\p\h\l\y\f\h\i\o\0\q\4\q\8\4\q\f\8\a\4\c\x\f\t\m\f\d\5\1\t\p\q\4\u\g\5\w\d\f\w\9\a\f\8\q\6\1\j\h\d\g\m\f\s\a\5\5\a\7\b\5\e\l\o\7\r\x\z\3\t\y\q\q\7\q\n\s\2\0\6\m\o\n\c\u\i\k\2\h\m\2\n\w\l\l\r\m\h\d\1\g\p\p\e\h\u\v\r\h\1\0\3\t\8\1\5\c\g\d\v\0\u\a\a\u\6\i\e\p\n\e\2\0\6\9\n\g\7\c\7\2\a\8\3\t\9\z\1\z\5\o\7\7\a\k\r\x\b\u\k\n\x\u\9\m\u\k\q\2\p\h\f\x\i\v\u\t\l\t\1\0\k\e\v\0\0\s\7\b\4\l\o\s\0\4\y\0\1\0\d\x\l\x\3\4\a\p\i\h\y\h\p\9\3\1\0\w\c\2\d\r\8\7\v\w\x\7\c\u\f\9\m\5\y\v\d\u\a\6\4\d\b\1\y\s\1\k\0\u\z\9\z\l\s\8\0\s\j\8\u\b\3\k\n\k\r\x\3\j\0\w\h\5\6\g\v\3\x\o\g\w\f\o\j\i\a\h\3\y\m\b\b\7\i\9\s\n\e\j\x\v\0\t\f\n\s\i\0\c\0\n\7\r\1\3\f\s\q\5\h\x\7\k\l\i\6\s\i\q\0\k\0\x\q\2\e\l\3\s\b\n\h\j\q\6\y\a\j\f\s\f\5\r\m\j\i\p\q\4\4\p\u\e\3\f\h\q\l\e\d\v\7\0\a\t\e\v\9\o\f\p\a\f\u\z\4\j\c\b\2\f\i\6\h\2\j\m\l\s\a\j\u\e\l\0\m\d\6\x\1\k\l\p\h\j\n\9\p\l\n\1\o\d\r\7\y\g\h\p\s\y\n\f\q\y\s\6\h\b\g\m\8\o\p\7\w\9\u\o\9\n\3\o\t\f\e\o\e\a\h\p\5\4\7\u\b\f\9\t\j\h\3\0\m\5\d\y\r\z\i\y\e\j\c\l\j\9\w\o\l\6\3\2\c\j\s\z\f\y\r\v\v\q\6\w\k\x\x\c\4\5\d\p\j\s\b\g\3\g\t\z\6\w\i\t\y\z\3\b\l\i\z\p\y\h\q\o\u\o\b\5\3\u\m\y\e\i\8\f\4\i\8\o\c\j\z\u\v\a\x\o\9\q\a\r\6\8\5\y\m\n\3\s\i\x\4\n\w\0\p\4\k\f\1\r\1\p\d\0\l\l\q\p\k\8\5\8\3\r\j\w\v\o\q\a\0\k\o\p\e\a\b\g\0\4\c\r\k\l\9\i\q\1\y\p\3\w\v\g\e\n\k\y\q\6\y\l\b\h\s\6\t\g\7\d\n\7\y\m\g\m\k\9\f\6\l\2\e\x\2\c\1\u\l\a\5\p\1\b\t\j\v\n\h\e\j\m\t\q\n\u\c\z\8\f\n\j\c\k\p\x\x\p\7\a\l\p\8\4\n\1\h\t\i\p\c\0\m\c\9\t\j\j\r\j\m\9\v\s\s\u\o\c\5\5\m\c\5\f\a\p\q\1\1\5\d\u\n\1\h\0\l\f\n\f\p\9\t\0\d\2\n\8\9\z\f\g\7\e\f\u\4\9\6\t\x\4\9\6\q\a\d\a\4\i\a\t\b\c\x\e\o\x\e\l\c\f\g\3\r\c\0\d\d\c\9\n\s\c\e\t\p\v\m\n\c\5\f\f\1\1\v\o
\b\6\z\w\6\y\t\c\u\r\9\g\a\5\p\h\k\s\2\n\j\r\g\t\r\u\j\c\f\0\r\a\i\g\d\q\e\n\z\q\7\f\o\n\j\8\u\5\6\g\v\n\4\q\i\z\m\a\3\i\p\0\s\z\j\d\0\z\u\v\f\0\8\3\g\r\n\y\b\g\x\j\5\e\a\y\1\8\m\o\v\h\l\s\w\z\4\6\l\0\i\z\8\8\k\i\9\i\e\e\0\r\o\u\h\d\e\t\e\9\x\d\7\3\1\u\r\j\0\1\o\e\z\9\4\w\g\k\x\4\f\a\k\a\n\c\m\0\3\b\g\v\9\f\8\3\f\v\x\1\6\o\p\u\9\l\m\0\1\d\g\j\f\i\i\v\d\y\k\e\5\f\l\f\g\9\m\4\6\a\9\a\q\3\4\7\e\s\d\v\k\l\v\t\p\q\z\u\c\j\4\b\v\i\g\1\3\9\x\h\o\q\g\k\v\o\s\e\d\z\u\k\z\s\b\7\l\u\n\x\1\z\a\a\r\j\5\2\h\e\e\v\s\c\3\8\b\3\l\y\9\w\e\x\b\t\o\z\0\f\b\w\k\7\w\e\j\f\4\2\w\j\r\l\z\6\8\q\2\l\x\v\n\f\i\6\1\u\y\i\e\4\x\v\z\l\n\6\1\a\a\h\k\k\v\k\s\f\j\d\0\w\8\f\k\v\j\t\i\3\2\1\x\h\4\f\b\l\n\s\7\4\w\v\2\i\v\q\5\q\u\1\z\4\o\8\d\f\3\n\0\2\4\h\q\h\7\5\v\r\7\1\9\h\s\j\d\l\a\b\7\q\0\1\g\z\a\3\p\5\l\r\y\b\y\y\5\w\y\r\i\1\y\1\n\q\t\v\z\g\d\g\s\x\g\1\u\m\t\x\7\d\t\t\k\s\u\a\7\3\9\y\4\1\2\1\h\w\x\w\y\8\h\i\k\q\g\t\c\e\l\t\6\j\f\y\1\i\j\8\o\d\4\x\t\d\2\5\x\e\v\0\j\z\0\f\n\b\c\g\c\e\g\u\q\3\3\k\p\3\m\0\8\r\m\d\4\8\o\1\w\y\6\x\t\z\c\r\d\3\9\9\o\5\5\s\g\t\i\r\g\8\d\q\d\8\f\6\h\5\s\x\4\4\n\1\p\i\h\c\l ]] 00:06:23.255 00:06:23.255 real 0m1.018s 00:06:23.255 user 0m0.717s 00:06:23.255 sys 0m0.389s 00:06:23.255 ************************************ 00:06:23.255 END TEST dd_rw_offset 00:06:23.255 ************************************ 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.255 08:40:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.255 [2024-12-11 08:40:30.872304] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:23.255 [2024-12-11 08:40:30.872403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61060 ] 00:06:23.255 { 00:06:23.255 "subsystems": [ 00:06:23.255 { 00:06:23.255 "subsystem": "bdev", 00:06:23.255 "config": [ 00:06:23.255 { 00:06:23.255 "params": { 00:06:23.255 "trtype": "pcie", 00:06:23.255 "traddr": "0000:00:10.0", 00:06:23.255 "name": "Nvme0" 00:06:23.255 }, 00:06:23.255 "method": "bdev_nvme_attach_controller" 00:06:23.255 }, 00:06:23.255 { 00:06:23.255 "method": "bdev_wait_for_examine" 00:06:23.255 } 00:06:23.255 ] 00:06:23.255 } 00:06:23.255 ] 00:06:23.255 } 00:06:23.255 [2024-12-11 08:40:31.016881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.514 [2024-12-11 08:40:31.054418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.514 [2024-12-11 08:40:31.084476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.514  [2024-12-11T08:40:31.548Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.774 00:06:23.774 08:40:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.774 00:06:23.774 real 0m14.699s 00:06:23.774 user 0m10.745s 00:06:23.774 sys 0m4.605s 00:06:23.774 ************************************ 00:06:23.774 END TEST spdk_dd_basic_rw 00:06:23.774 ************************************ 00:06:23.774 08:40:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.774 08:40:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.774 08:40:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:23.774 08:40:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.774 08:40:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.774 08:40:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:23.774 ************************************ 00:06:23.774 START TEST spdk_dd_posix 00:06:23.774 ************************************ 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:23.774 * Looking for test storage... 
00:06:23.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.774 --rc genhtml_branch_coverage=1 00:06:23.774 --rc genhtml_function_coverage=1 00:06:23.774 --rc genhtml_legend=1 00:06:23.774 --rc geninfo_all_blocks=1 00:06:23.774 --rc geninfo_unexecuted_blocks=1 00:06:23.774 00:06:23.774 ' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.774 --rc genhtml_branch_coverage=1 00:06:23.774 --rc genhtml_function_coverage=1 00:06:23.774 --rc genhtml_legend=1 00:06:23.774 --rc geninfo_all_blocks=1 00:06:23.774 --rc geninfo_unexecuted_blocks=1 00:06:23.774 00:06:23.774 ' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.774 --rc genhtml_branch_coverage=1 00:06:23.774 --rc genhtml_function_coverage=1 00:06:23.774 --rc genhtml_legend=1 00:06:23.774 --rc geninfo_all_blocks=1 00:06:23.774 --rc geninfo_unexecuted_blocks=1 00:06:23.774 00:06:23.774 ' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.774 --rc genhtml_branch_coverage=1 00:06:23.774 --rc genhtml_function_coverage=1 00:06:23.774 --rc genhtml_legend=1 00:06:23.774 --rc geninfo_all_blocks=1 00:06:23.774 --rc geninfo_unexecuted_blocks=1 00:06:23.774 00:06:23.774 ' 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.774 08:40:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:23.775 * First test run, liburing in use 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:23.775 08:40:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:24.034 ************************************ 00:06:24.034 START TEST dd_flag_append 00:06:24.034 ************************************ 00:06:24.034 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:24.034 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ox0h9gdiuekdsm2jt2iq2yi0awvsxyh2 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ovuzf4szhxb7iazujnbe18ld07hmqqet 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ox0h9gdiuekdsm2jt2iq2yi0awvsxyh2 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ovuzf4szhxb7iazujnbe18ld07hmqqet 00:06:24.035 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:24.035 [2024-12-11 08:40:31.609652] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:24.035 [2024-12-11 08:40:31.609733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61132 ] 00:06:24.035 [2024-12-11 08:40:31.751430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.035 [2024-12-11 08:40:31.786813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.294 [2024-12-11 08:40:31.818813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.294  [2024-12-11T08:40:32.068Z] Copying: 32/32 [B] (average 31 kBps) 00:06:24.294 00:06:24.294 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ovuzf4szhxb7iazujnbe18ld07hmqqetox0h9gdiuekdsm2jt2iq2yi0awvsxyh2 == \o\v\u\z\f\4\s\z\h\x\b\7\i\a\z\u\j\n\b\e\1\8\l\d\0\7\h\m\q\q\e\t\o\x\0\h\9\g\d\i\u\e\k\d\s\m\2\j\t\2\i\q\2\y\i\0\a\w\v\s\x\y\h\2 ]] 00:06:24.294 00:06:24.294 real 0m0.415s 00:06:24.294 user 0m0.198s 00:06:24.294 sys 0m0.197s 00:06:24.294 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.294 ************************************ 00:06:24.294 08:40:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:24.294 END TEST dd_flag_append 00:06:24.294 ************************************ 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:24.294 ************************************ 00:06:24.294 START TEST dd_flag_directory 00:06:24.294 ************************************ 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.294 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.294 [2024-12-11 08:40:32.064718] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:24.294 [2024-12-11 08:40:32.064796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:06:24.554 [2024-12-11 08:40:32.207446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.554 [2024-12-11 08:40:32.246843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.554 [2024-12-11 08:40:32.279147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.554 [2024-12-11 08:40:32.299569] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:24.554 [2024-12-11 08:40:32.299624] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:24.554 [2024-12-11 08:40:32.299638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.813 [2024-12-11 08:40:32.367661] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.813 08:40:32 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.813 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:24.813 [2024-12-11 08:40:32.474759] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:24.813 [2024-12-11 08:40:32.474835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:06:25.072 [2024-12-11 08:40:32.613745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.072 [2024-12-11 08:40:32.647795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.072 [2024-12-11 08:40:32.677667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.072 [2024-12-11 08:40:32.696889] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:25.072 [2024-12-11 08:40:32.696985] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:25.072 [2024-12-11 08:40:32.696999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.072 [2024-12-11 08:40:32.761673] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.072 00:06:25.072 real 0m0.805s 00:06:25.072 user 0m0.421s 00:06:25.072 sys 0m0.177s 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.072 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 ************************************ 00:06:25.072 END TEST dd_flag_directory 00:06:25.072 ************************************ 00:06:25.332 08:40:32 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.332 ************************************ 00:06:25.332 START TEST dd_flag_nofollow 00:06:25.332 ************************************ 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.332 08:40:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.332 [2024-12-11 08:40:32.935230] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:25.332 [2024-12-11 08:40:32.935320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:06:25.332 [2024-12-11 08:40:33.081734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.591 [2024-12-11 08:40:33.117870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.591 [2024-12-11 08:40:33.153357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.591 [2024-12-11 08:40:33.175369] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:25.591 [2024-12-11 08:40:33.175435] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:25.591 [2024-12-11 08:40:33.175464] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.591 [2024-12-11 08:40:33.242183] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.591 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.592 08:40:33 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.592 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:25.592 [2024-12-11 08:40:33.361137] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:25.592 [2024-12-11 08:40:33.361255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61202 ] 00:06:25.851 [2024-12-11 08:40:33.501181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.851 [2024-12-11 08:40:33.535060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.851 [2024-12-11 08:40:33.566156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.851 [2024-12-11 08:40:33.585724] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:25.851 [2024-12-11 08:40:33.585788] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:25.851 [2024-12-11 08:40:33.585819] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.110 [2024-12-11 08:40:33.652560] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:26.110 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:26.110 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 08:40:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.111 [2024-12-11 08:40:33.756867] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:26.111 [2024-12-11 08:40:33.756999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61210 ] 00:06:26.370 [2024-12-11 08:40:33.897279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.370 [2024-12-11 08:40:33.931677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.370 [2024-12-11 08:40:33.964106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.370  [2024-12-11T08:40:34.144Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.370 00:06:26.370 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ secsyqro1hhoncc7pvn3ihqf9hycyqsn3flvd3zhiou27fjgoxa1fywx651q5nwx485lgwjqbjhocdsqii9krhw9t46ep9ots7f1jslner6w11js5new96x72jm524d7u3502l9umviaizzq6trmzthrg43q70ibai2nrlihpwrp0un3wcuiukpfcz65i75bwmtvkzug9lkjkqa1x3z65iwx1ti3289278wjpbnr0f3bvqqsjsocgera8k11o4284dzwwkr3ch5muycb5dxr6mouwhrzg5l4ry2vw1hsk2ozaszo7v9ax1d125ygxzjrhpenjl6xrrzy4m52nof1wufjshcyy32cc9q6oec3e3iavx6k6g09w6o4tt6xboeun949v9bw4fv7qqidmvvf1vab9ddrbfc8rzur3pnj31pgny1fsetgbeqccrozeazbrtmi12r924u8wnohjqyk45ch75rjxjhs78h8o72lnrmmwhyic3nquyez4sfd92wx == \s\e\c\s\y\q\r\o\1\h\h\o\n\c\c\7\p\v\n\3\i\h\q\f\9\h\y\c\y\q\s\n\3\f\l\v\d\3\z\h\i\o\u\2\7\f\j\g\o\x\a\1\f\y\w\x\6\5\1\q\5\n\w\x\4\8\5\l\g\w\j\q\b\j\h\o\c\d\s\q\i\i\9\k\r\h\w\9\t\4\6\e\p\9\o\t\s\7\f\1\j\s\l\n\e\r\6\w\1\1\j\s\5\n\e\w\9\6\x\7\2\j\m\5\2\4\d\7\u\3\5\0\2\l\9\u\m\v\i\a\i\z\z\q\6\t\r\m\z\t\h\r\g\4\3\q\7\0\i\b\a\i\2\n\r\l\i\h\p\w\r\p\0\u\n\3\w\c\u\i\u\k\p\f\c\z\6\5\i\7\5\b\w\m\t\v\k\z\u\g\9\l\k\j\k\q\a\1\x\3\z\6\5\i\w\x\1\t\i\3\2\8\9\2\7\8\w\j\p\b\n\r\0\f\3\b\v\q\q\s\j\s\o\c\g\e\r\a\8\k\1\1\o\4\2\8\4\d\z\w\w\k\r\3\c\h\5\m\u\y\c\b\5\d\x\r\6\m\o\u\w\h\r\z\g\5\l\4\r\y\2\v\w\1\h\s\k\2\o\z\a\s\z\o\7\v\9\a\x\1\d\1\2\5\y\g\x\z\j\r\h\p\e\n\j\l\6\x\r\r\z\y\4\m\5\2\n\o\f\1\w\u\f\j\s\h\c\y\y\3\2\c\c\9\q\6\o\e\c\3\e\3\i\a\v\x\6\k\6\g\0\9\w\6\o\4\t\t\6\x\b\o\e\u\n\9\4\9\v\9\b\w\4\f\v\7\q\q\i\d\m\v\v\f\1\v\a\b\9\d\d\r\b\f\c\8\r\z\u\r\3\p\n\j\3\1\p\g\n\y\1\f\s\e\t\g\b\e\q\c\c\r\o\z\e\a\z\b\r\t\m\i\1\2\r\9\2\4\u\8\w\n\o\h\j\q\y\k\4\5\c\h\7\5\r\j\x\j\h\s\7\8\h\8\o\7\2\l\n\r\m\m\w\h\y\i\c\3\n\q\u\y\e\z\4\s\f\d\9\2\w\x ]] 00:06:26.370 00:06:26.370 real 0m1.239s 00:06:26.370 user 0m0.632s 00:06:26.370 sys 0m0.373s 00:06:26.370 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.370 ************************************ 00:06:26.370 END TEST dd_flag_nofollow 00:06:26.370 ************************************ 00:06:26.370 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.629 ************************************ 00:06:26.629 START TEST dd_flag_noatime 00:06:26.629 ************************************ 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733906433 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733906434 00:06:26.629 08:40:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:27.565 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.565 [2024-12-11 08:40:35.237040] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:27.565 [2024-12-11 08:40:35.237814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61247 ] 00:06:27.823 [2024-12-11 08:40:35.384847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.823 [2024-12-11 08:40:35.424868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.823 [2024-12-11 08:40:35.462437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.823  [2024-12-11T08:40:35.857Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.083 00:06:28.083 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.083 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733906433 )) 00:06:28.083 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.083 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733906434 )) 00:06:28.083 08:40:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.083 [2024-12-11 08:40:35.678217] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:28.083 [2024-12-11 08:40:35.678318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61266 ] 00:06:28.083 [2024-12-11 08:40:35.825840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.342 [2024-12-11 08:40:35.860981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.342 [2024-12-11 08:40:35.892123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.342  [2024-12-11T08:40:36.116Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.342 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733906435 )) 00:06:28.342 00:06:28.342 real 0m1.883s 00:06:28.342 user 0m0.450s 00:06:28.342 sys 0m0.380s 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.342 ************************************ 00:06:28.342 END TEST dd_flag_noatime 00:06:28.342 ************************************ 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.342 ************************************ 00:06:28.342 START TEST dd_flags_misc 00:06:28.342 ************************************ 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.342 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:28.601 [2024-12-11 08:40:36.148060] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:28.602 [2024-12-11 08:40:36.148167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:06:28.602 [2024-12-11 08:40:36.289393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.602 [2024-12-11 08:40:36.322446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.602 [2024-12-11 08:40:36.351278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.602  [2024-12-11T08:40:36.634Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.860 00:06:28.860 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2lwpo75m78njblivd69wj8cagx8mjvvihqfh2nnm1x9b62l1z7j72eq6h87y4r17rv9qqwrimwnr93igyoytpb9cm8uw40dn3sf1yyid10zm8uwq8yjf9qt7bbrq81p8foz730hqjawsujcnk466dl1kyj1waqeu6wdginsm3m0zw8o995lzev2ic7ywychq81wovaobek943vcti8j6pt8zsv1e73ne2pxyebxiz64h2o122dtj9bu8re70vqvyjbr9b25qr1nveytbq4dtvzabv2090p5t7uqawtr8k2yp8dyec53eep1oyzhbh89bf6wjee502xq95cxjb9wxbm2q9jmexh9hpjqzhz18zfzlf26bwylrqur83p9oy8l9gjoc5yld04ypyt3gxydmy5kc78ocf1jdg6htnt974ewr2cikcdr3u9327e69dx5ahtqwdj0k0hzi7wjocu5teu216il2zbv37b2rteqilp6zqavgd20q6skvib68svi1 == \2\l\w\p\o\7\5\m\7\8\n\j\b\l\i\v\d\6\9\w\j\8\c\a\g\x\8\m\j\v\v\i\h\q\f\h\2\n\n\m\1\x\9\b\6\2\l\1\z\7\j\7\2\e\q\6\h\8\7\y\4\r\1\7\r\v\9\q\q\w\r\i\m\w\n\r\9\3\i\g\y\o\y\t\p\b\9\c\m\8\u\w\4\0\d\n\3\s\f\1\y\y\i\d\1\0\z\m\8\u\w\q\8\y\j\f\9\q\t\7\b\b\r\q\8\1\p\8\f\o\z\7\3\0\h\q\j\a\w\s\u\j\c\n\k\4\6\6\d\l\1\k\y\j\1\w\a\q\e\u\6\w\d\g\i\n\s\m\3\m\0\z\w\8\o\9\9\5\l\z\e\v\2\i\c\7\y\w\y\c\h\q\8\1\w\o\v\a\o\b\e\k\9\4\3\v\c\t\i\8\j\6\p\t\8\z\s\v\1\e\7\3\n\e\2\p\x\y\e\b\x\i\z\6\4\h\2\o\1\2\2\d\t\j\9\b\u\8\r\e\7\0\v\q\v\y\j\b\r\9\b\2\5\q\r\1\n\v\e\y\t\b\q\4\d\t\v\z\a\b\v\2\0\9\0\p\5\t\7\u\q\a\w\t\r\8\k\2\y\p\8\d\y\e\c\5\3\e\e\p\1\o\y\z\h\b\h\8\9\b\f\6\w\j\e\e\5\0\2\x\q\9\5\c\x\j\b\9\w\x\b\m\2\q\9\j\m\e\x\h\9\h\p\j\q\z\h\z\1\8\z\f\z\l\f\2\6\b\w\y\l\r\q\u\r\8\3\p\9\o\y\8\l\9\g\j\o\c\5\y\l\d\0\4\y\p\y\t\3\g\x\y\d\m\y\5\k\c\7\8\o\c\f\1\j\d\g\6\h\t\n\t\9\7\4\e\w\r\2\c\i\k\c\d\r\3\u\9\3\2\7\e\6\9\d\x\5\a\h\t\q\w\d\j\0\k\0\h\z\i\7\w\j\o\c\u\5\t\e\u\2\1\6\i\l\2\z\b\v\3\7\b\2\r\t\e\q\i\l\p\6\z\q\a\v\g\d\2\0\q\6\s\k\v\i\b\6\8\s\v\i\1 ]] 00:06:28.860 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.860 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:28.860 [2024-12-11 08:40:36.542435] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:28.860 [2024-12-11 08:40:36.542532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61298 ] 00:06:29.119 [2024-12-11 08:40:36.694616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.119 [2024-12-11 08:40:36.727552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.119 [2024-12-11 08:40:36.756342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.119  [2024-12-11T08:40:37.151Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.377 00:06:29.377 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2lwpo75m78njblivd69wj8cagx8mjvvihqfh2nnm1x9b62l1z7j72eq6h87y4r17rv9qqwrimwnr93igyoytpb9cm8uw40dn3sf1yyid10zm8uwq8yjf9qt7bbrq81p8foz730hqjawsujcnk466dl1kyj1waqeu6wdginsm3m0zw8o995lzev2ic7ywychq81wovaobek943vcti8j6pt8zsv1e73ne2pxyebxiz64h2o122dtj9bu8re70vqvyjbr9b25qr1nveytbq4dtvzabv2090p5t7uqawtr8k2yp8dyec53eep1oyzhbh89bf6wjee502xq95cxjb9wxbm2q9jmexh9hpjqzhz18zfzlf26bwylrqur83p9oy8l9gjoc5yld04ypyt3gxydmy5kc78ocf1jdg6htnt974ewr2cikcdr3u9327e69dx5ahtqwdj0k0hzi7wjocu5teu216il2zbv37b2rteqilp6zqavgd20q6skvib68svi1 == \2\l\w\p\o\7\5\m\7\8\n\j\b\l\i\v\d\6\9\w\j\8\c\a\g\x\8\m\j\v\v\i\h\q\f\h\2\n\n\m\1\x\9\b\6\2\l\1\z\7\j\7\2\e\q\6\h\8\7\y\4\r\1\7\r\v\9\q\q\w\r\i\m\w\n\r\9\3\i\g\y\o\y\t\p\b\9\c\m\8\u\w\4\0\d\n\3\s\f\1\y\y\i\d\1\0\z\m\8\u\w\q\8\y\j\f\9\q\t\7\b\b\r\q\8\1\p\8\f\o\z\7\3\0\h\q\j\a\w\s\u\j\c\n\k\4\6\6\d\l\1\k\y\j\1\w\a\q\e\u\6\w\d\g\i\n\s\m\3\m\0\z\w\8\o\9\9\5\l\z\e\v\2\i\c\7\y\w\y\c\h\q\8\1\w\o\v\a\o\b\e\k\9\4\3\v\c\t\i\8\j\6\p\t\8\z\s\v\1\e\7\3\n\e\2\p\x\y\e\b\x\i\z\6\4\h\2\o\1\2\2\d\t\j\9\b\u\8\r\e\7\0\v\q\v\y\j\b\r\9\b\2\5\q\r\1\n\v\e\y\t\b\q\4\d\t\v\z\a\b\v\2\0\9\0\p\5\t\7\u\q\a\w\t\r\8\k\2\y\p\8\d\y\e\c\5\3\e\e\p\1\o\y\z\h\b\h\8\9\b\f\6\w\j\e\e\5\0\2\x\q\9\5\c\x\j\b\9\w\x\b\m\2\q\9\j\m\e\x\h\9\h\p\j\q\z\h\z\1\8\z\f\z\l\f\2\6\b\w\y\l\r\q\u\r\8\3\p\9\o\y\8\l\9\g\j\o\c\5\y\l\d\0\4\y\p\y\t\3\g\x\y\d\m\y\5\k\c\7\8\o\c\f\1\j\d\g\6\h\t\n\t\9\7\4\e\w\r\2\c\i\k\c\d\r\3\u\9\3\2\7\e\6\9\d\x\5\a\h\t\q\w\d\j\0\k\0\h\z\i\7\w\j\o\c\u\5\t\e\u\2\1\6\i\l\2\z\b\v\3\7\b\2\r\t\e\q\i\l\p\6\z\q\a\v\g\d\2\0\q\6\s\k\v\i\b\6\8\s\v\i\1 ]] 00:06:29.377 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.377 08:40:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:29.377 [2024-12-11 08:40:36.951771] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:29.377 [2024-12-11 08:40:36.951858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:06:29.377 [2024-12-11 08:40:37.093169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.377 [2024-12-11 08:40:37.125759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.636 [2024-12-11 08:40:37.154363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.636  [2024-12-11T08:40:37.410Z] Copying: 512/512 [B] (average 125 kBps) 00:06:29.636 00:06:29.636 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2lwpo75m78njblivd69wj8cagx8mjvvihqfh2nnm1x9b62l1z7j72eq6h87y4r17rv9qqwrimwnr93igyoytpb9cm8uw40dn3sf1yyid10zm8uwq8yjf9qt7bbrq81p8foz730hqjawsujcnk466dl1kyj1waqeu6wdginsm3m0zw8o995lzev2ic7ywychq81wovaobek943vcti8j6pt8zsv1e73ne2pxyebxiz64h2o122dtj9bu8re70vqvyjbr9b25qr1nveytbq4dtvzabv2090p5t7uqawtr8k2yp8dyec53eep1oyzhbh89bf6wjee502xq95cxjb9wxbm2q9jmexh9hpjqzhz18zfzlf26bwylrqur83p9oy8l9gjoc5yld04ypyt3gxydmy5kc78ocf1jdg6htnt974ewr2cikcdr3u9327e69dx5ahtqwdj0k0hzi7wjocu5teu216il2zbv37b2rteqilp6zqavgd20q6skvib68svi1 == \2\l\w\p\o\7\5\m\7\8\n\j\b\l\i\v\d\6\9\w\j\8\c\a\g\x\8\m\j\v\v\i\h\q\f\h\2\n\n\m\1\x\9\b\6\2\l\1\z\7\j\7\2\e\q\6\h\8\7\y\4\r\1\7\r\v\9\q\q\w\r\i\m\w\n\r\9\3\i\g\y\o\y\t\p\b\9\c\m\8\u\w\4\0\d\n\3\s\f\1\y\y\i\d\1\0\z\m\8\u\w\q\8\y\j\f\9\q\t\7\b\b\r\q\8\1\p\8\f\o\z\7\3\0\h\q\j\a\w\s\u\j\c\n\k\4\6\6\d\l\1\k\y\j\1\w\a\q\e\u\6\w\d\g\i\n\s\m\3\m\0\z\w\8\o\9\9\5\l\z\e\v\2\i\c\7\y\w\y\c\h\q\8\1\w\o\v\a\o\b\e\k\9\4\3\v\c\t\i\8\j\6\p\t\8\z\s\v\1\e\7\3\n\e\2\p\x\y\e\b\x\i\z\6\4\h\2\o\1\2\2\d\t\j\9\b\u\8\r\e\7\0\v\q\v\y\j\b\r\9\b\2\5\q\r\1\n\v\e\y\t\b\q\4\d\t\v\z\a\b\v\2\0\9\0\p\5\t\7\u\q\a\w\t\r\8\k\2\y\p\8\d\y\e\c\5\3\e\e\p\1\o\y\z\h\b\h\8\9\b\f\6\w\j\e\e\5\0\2\x\q\9\5\c\x\j\b\9\w\x\b\m\2\q\9\j\m\e\x\h\9\h\p\j\q\z\h\z\1\8\z\f\z\l\f\2\6\b\w\y\l\r\q\u\r\8\3\p\9\o\y\8\l\9\g\j\o\c\5\y\l\d\0\4\y\p\y\t\3\g\x\y\d\m\y\5\k\c\7\8\o\c\f\1\j\d\g\6\h\t\n\t\9\7\4\e\w\r\2\c\i\k\c\d\r\3\u\9\3\2\7\e\6\9\d\x\5\a\h\t\q\w\d\j\0\k\0\h\z\i\7\w\j\o\c\u\5\t\e\u\2\1\6\i\l\2\z\b\v\3\7\b\2\r\t\e\q\i\l\p\6\z\q\a\v\g\d\2\0\q\6\s\k\v\i\b\6\8\s\v\i\1 ]] 00:06:29.636 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.636 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:29.636 [2024-12-11 08:40:37.338747] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:29.636 [2024-12-11 08:40:37.338834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:06:29.894 [2024-12-11 08:40:37.481281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.894 [2024-12-11 08:40:37.514747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.894 [2024-12-11 08:40:37.543934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.894  [2024-12-11T08:40:37.927Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.153 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2lwpo75m78njblivd69wj8cagx8mjvvihqfh2nnm1x9b62l1z7j72eq6h87y4r17rv9qqwrimwnr93igyoytpb9cm8uw40dn3sf1yyid10zm8uwq8yjf9qt7bbrq81p8foz730hqjawsujcnk466dl1kyj1waqeu6wdginsm3m0zw8o995lzev2ic7ywychq81wovaobek943vcti8j6pt8zsv1e73ne2pxyebxiz64h2o122dtj9bu8re70vqvyjbr9b25qr1nveytbq4dtvzabv2090p5t7uqawtr8k2yp8dyec53eep1oyzhbh89bf6wjee502xq95cxjb9wxbm2q9jmexh9hpjqzhz18zfzlf26bwylrqur83p9oy8l9gjoc5yld04ypyt3gxydmy5kc78ocf1jdg6htnt974ewr2cikcdr3u9327e69dx5ahtqwdj0k0hzi7wjocu5teu216il2zbv37b2rteqilp6zqavgd20q6skvib68svi1 == \2\l\w\p\o\7\5\m\7\8\n\j\b\l\i\v\d\6\9\w\j\8\c\a\g\x\8\m\j\v\v\i\h\q\f\h\2\n\n\m\1\x\9\b\6\2\l\1\z\7\j\7\2\e\q\6\h\8\7\y\4\r\1\7\r\v\9\q\q\w\r\i\m\w\n\r\9\3\i\g\y\o\y\t\p\b\9\c\m\8\u\w\4\0\d\n\3\s\f\1\y\y\i\d\1\0\z\m\8\u\w\q\8\y\j\f\9\q\t\7\b\b\r\q\8\1\p\8\f\o\z\7\3\0\h\q\j\a\w\s\u\j\c\n\k\4\6\6\d\l\1\k\y\j\1\w\a\q\e\u\6\w\d\g\i\n\s\m\3\m\0\z\w\8\o\9\9\5\l\z\e\v\2\i\c\7\y\w\y\c\h\q\8\1\w\o\v\a\o\b\e\k\9\4\3\v\c\t\i\8\j\6\p\t\8\z\s\v\1\e\7\3\n\e\2\p\x\y\e\b\x\i\z\6\4\h\2\o\1\2\2\d\t\j\9\b\u\8\r\e\7\0\v\q\v\y\j\b\r\9\b\2\5\q\r\1\n\v\e\y\t\b\q\4\d\t\v\z\a\b\v\2\0\9\0\p\5\t\7\u\q\a\w\t\r\8\k\2\y\p\8\d\y\e\c\5\3\e\e\p\1\o\y\z\h\b\h\8\9\b\f\6\w\j\e\e\5\0\2\x\q\9\5\c\x\j\b\9\w\x\b\m\2\q\9\j\m\e\x\h\9\h\p\j\q\z\h\z\1\8\z\f\z\l\f\2\6\b\w\y\l\r\q\u\r\8\3\p\9\o\y\8\l\9\g\j\o\c\5\y\l\d\0\4\y\p\y\t\3\g\x\y\d\m\y\5\k\c\7\8\o\c\f\1\j\d\g\6\h\t\n\t\9\7\4\e\w\r\2\c\i\k\c\d\r\3\u\9\3\2\7\e\6\9\d\x\5\a\h\t\q\w\d\j\0\k\0\h\z\i\7\w\j\o\c\u\5\t\e\u\2\1\6\i\l\2\z\b\v\3\7\b\2\r\t\e\q\i\l\p\6\z\q\a\v\g\d\2\0\q\6\s\k\v\i\b\6\8\s\v\i\1 ]] 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.153 08:40:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.153 [2024-12-11 08:40:37.747477] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
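Each cell of that matrix has a same-named analogue in coreutils dd, which can be easier to reason about. The run launched just above (nonblock read, direct write) would look roughly like this outside of SPDK; this is an illustrative reference command, not part of the captured log.

```bash
# coreutils dd analogue of the nonblock/direct cell (illustrative only);
# O_DIRECT on the write side needs block-aligned I/O, hence bs=512.
dd if=dd.dump0 iflag=nonblock of=dd.dump1 oflag=direct bs=512 count=1
```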
00:06:30.153 [2024-12-11 08:40:37.747593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61327 ] 00:06:30.153 [2024-12-11 08:40:37.894012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.412 [2024-12-11 08:40:37.926808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.412 [2024-12-11 08:40:37.955602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.412  [2024-12-11T08:40:38.186Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.412 00:06:30.412 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ru73cccsf3wpv16jcyvkm512kfhjnhnybyxuk4wj88x1wivtzqvffzebngl79cd9lxe35gzd0ahnc83yhy2lewoqrk0gguyomdlmjo1byj2kwdkbvdxlpj5mj83k37ob6kucwyvuj3g54scfifpom0nvxrf78292oi66n3imvlk0qu2ezgutdbhhh8bp0uhaaiyb3s18ec5p71gl1imgxdv5u6742qblbrnlsgesz0ustajeio28s7kkipmyn7accb4sdnpr8eihmsp7ri7w97vqfp46lfmq24gb1dfajnyg18cmblbyeag1uedjyij4w9nqjbhzad8ljum5lj40qlzmbrc7kthhbaok69u99lbrgojc2fa1gyby54u5j1fl69ua1sweup3q58w4dteo3j5j80nyb9n3qv58lnfiqh8ik6fb645vx6ayz7tdm4lukggwt937wso73spt8bba9tuzue30ddnjvnobd61slb70bw4jmu9m0yo13jba13ds == \r\u\7\3\c\c\c\s\f\3\w\p\v\1\6\j\c\y\v\k\m\5\1\2\k\f\h\j\n\h\n\y\b\y\x\u\k\4\w\j\8\8\x\1\w\i\v\t\z\q\v\f\f\z\e\b\n\g\l\7\9\c\d\9\l\x\e\3\5\g\z\d\0\a\h\n\c\8\3\y\h\y\2\l\e\w\o\q\r\k\0\g\g\u\y\o\m\d\l\m\j\o\1\b\y\j\2\k\w\d\k\b\v\d\x\l\p\j\5\m\j\8\3\k\3\7\o\b\6\k\u\c\w\y\v\u\j\3\g\5\4\s\c\f\i\f\p\o\m\0\n\v\x\r\f\7\8\2\9\2\o\i\6\6\n\3\i\m\v\l\k\0\q\u\2\e\z\g\u\t\d\b\h\h\h\8\b\p\0\u\h\a\a\i\y\b\3\s\1\8\e\c\5\p\7\1\g\l\1\i\m\g\x\d\v\5\u\6\7\4\2\q\b\l\b\r\n\l\s\g\e\s\z\0\u\s\t\a\j\e\i\o\2\8\s\7\k\k\i\p\m\y\n\7\a\c\c\b\4\s\d\n\p\r\8\e\i\h\m\s\p\7\r\i\7\w\9\7\v\q\f\p\4\6\l\f\m\q\2\4\g\b\1\d\f\a\j\n\y\g\1\8\c\m\b\l\b\y\e\a\g\1\u\e\d\j\y\i\j\4\w\9\n\q\j\b\h\z\a\d\8\l\j\u\m\5\l\j\4\0\q\l\z\m\b\r\c\7\k\t\h\h\b\a\o\k\6\9\u\9\9\l\b\r\g\o\j\c\2\f\a\1\g\y\b\y\5\4\u\5\j\1\f\l\6\9\u\a\1\s\w\e\u\p\3\q\5\8\w\4\d\t\e\o\3\j\5\j\8\0\n\y\b\9\n\3\q\v\5\8\l\n\f\i\q\h\8\i\k\6\f\b\6\4\5\v\x\6\a\y\z\7\t\d\m\4\l\u\k\g\g\w\t\9\3\7\w\s\o\7\3\s\p\t\8\b\b\a\9\t\u\z\u\e\3\0\d\d\n\j\v\n\o\b\d\6\1\s\l\b\7\0\b\w\4\j\m\u\9\m\0\y\o\1\3\j\b\a\1\3\d\s ]] 00:06:30.412 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.412 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.412 [2024-12-11 08:40:38.147414] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:30.412 [2024-12-11 08:40:38.147525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:06:30.670 [2024-12-11 08:40:38.294051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.670 [2024-12-11 08:40:38.326799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.670 [2024-12-11 08:40:38.355552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.670  [2024-12-11T08:40:38.702Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.928 00:06:30.928 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ru73cccsf3wpv16jcyvkm512kfhjnhnybyxuk4wj88x1wivtzqvffzebngl79cd9lxe35gzd0ahnc83yhy2lewoqrk0gguyomdlmjo1byj2kwdkbvdxlpj5mj83k37ob6kucwyvuj3g54scfifpom0nvxrf78292oi66n3imvlk0qu2ezgutdbhhh8bp0uhaaiyb3s18ec5p71gl1imgxdv5u6742qblbrnlsgesz0ustajeio28s7kkipmyn7accb4sdnpr8eihmsp7ri7w97vqfp46lfmq24gb1dfajnyg18cmblbyeag1uedjyij4w9nqjbhzad8ljum5lj40qlzmbrc7kthhbaok69u99lbrgojc2fa1gyby54u5j1fl69ua1sweup3q58w4dteo3j5j80nyb9n3qv58lnfiqh8ik6fb645vx6ayz7tdm4lukggwt937wso73spt8bba9tuzue30ddnjvnobd61slb70bw4jmu9m0yo13jba13ds == \r\u\7\3\c\c\c\s\f\3\w\p\v\1\6\j\c\y\v\k\m\5\1\2\k\f\h\j\n\h\n\y\b\y\x\u\k\4\w\j\8\8\x\1\w\i\v\t\z\q\v\f\f\z\e\b\n\g\l\7\9\c\d\9\l\x\e\3\5\g\z\d\0\a\h\n\c\8\3\y\h\y\2\l\e\w\o\q\r\k\0\g\g\u\y\o\m\d\l\m\j\o\1\b\y\j\2\k\w\d\k\b\v\d\x\l\p\j\5\m\j\8\3\k\3\7\o\b\6\k\u\c\w\y\v\u\j\3\g\5\4\s\c\f\i\f\p\o\m\0\n\v\x\r\f\7\8\2\9\2\o\i\6\6\n\3\i\m\v\l\k\0\q\u\2\e\z\g\u\t\d\b\h\h\h\8\b\p\0\u\h\a\a\i\y\b\3\s\1\8\e\c\5\p\7\1\g\l\1\i\m\g\x\d\v\5\u\6\7\4\2\q\b\l\b\r\n\l\s\g\e\s\z\0\u\s\t\a\j\e\i\o\2\8\s\7\k\k\i\p\m\y\n\7\a\c\c\b\4\s\d\n\p\r\8\e\i\h\m\s\p\7\r\i\7\w\9\7\v\q\f\p\4\6\l\f\m\q\2\4\g\b\1\d\f\a\j\n\y\g\1\8\c\m\b\l\b\y\e\a\g\1\u\e\d\j\y\i\j\4\w\9\n\q\j\b\h\z\a\d\8\l\j\u\m\5\l\j\4\0\q\l\z\m\b\r\c\7\k\t\h\h\b\a\o\k\6\9\u\9\9\l\b\r\g\o\j\c\2\f\a\1\g\y\b\y\5\4\u\5\j\1\f\l\6\9\u\a\1\s\w\e\u\p\3\q\5\8\w\4\d\t\e\o\3\j\5\j\8\0\n\y\b\9\n\3\q\v\5\8\l\n\f\i\q\h\8\i\k\6\f\b\6\4\5\v\x\6\a\y\z\7\t\d\m\4\l\u\k\g\g\w\t\9\3\7\w\s\o\7\3\s\p\t\8\b\b\a\9\t\u\z\u\e\3\0\d\d\n\j\v\n\o\b\d\6\1\s\l\b\7\0\b\w\4\j\m\u\9\m\0\y\o\1\3\j\b\a\1\3\d\s ]] 00:06:30.928 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.928 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:30.928 [2024-12-11 08:40:38.551469] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:30.928 [2024-12-11 08:40:38.551595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:06:30.928 [2024-12-11 08:40:38.699101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.187 [2024-12-11 08:40:38.732589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.187 [2024-12-11 08:40:38.761403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.187  [2024-12-11T08:40:38.961Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.187 00:06:31.187 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ru73cccsf3wpv16jcyvkm512kfhjnhnybyxuk4wj88x1wivtzqvffzebngl79cd9lxe35gzd0ahnc83yhy2lewoqrk0gguyomdlmjo1byj2kwdkbvdxlpj5mj83k37ob6kucwyvuj3g54scfifpom0nvxrf78292oi66n3imvlk0qu2ezgutdbhhh8bp0uhaaiyb3s18ec5p71gl1imgxdv5u6742qblbrnlsgesz0ustajeio28s7kkipmyn7accb4sdnpr8eihmsp7ri7w97vqfp46lfmq24gb1dfajnyg18cmblbyeag1uedjyij4w9nqjbhzad8ljum5lj40qlzmbrc7kthhbaok69u99lbrgojc2fa1gyby54u5j1fl69ua1sweup3q58w4dteo3j5j80nyb9n3qv58lnfiqh8ik6fb645vx6ayz7tdm4lukggwt937wso73spt8bba9tuzue30ddnjvnobd61slb70bw4jmu9m0yo13jba13ds == \r\u\7\3\c\c\c\s\f\3\w\p\v\1\6\j\c\y\v\k\m\5\1\2\k\f\h\j\n\h\n\y\b\y\x\u\k\4\w\j\8\8\x\1\w\i\v\t\z\q\v\f\f\z\e\b\n\g\l\7\9\c\d\9\l\x\e\3\5\g\z\d\0\a\h\n\c\8\3\y\h\y\2\l\e\w\o\q\r\k\0\g\g\u\y\o\m\d\l\m\j\o\1\b\y\j\2\k\w\d\k\b\v\d\x\l\p\j\5\m\j\8\3\k\3\7\o\b\6\k\u\c\w\y\v\u\j\3\g\5\4\s\c\f\i\f\p\o\m\0\n\v\x\r\f\7\8\2\9\2\o\i\6\6\n\3\i\m\v\l\k\0\q\u\2\e\z\g\u\t\d\b\h\h\h\8\b\p\0\u\h\a\a\i\y\b\3\s\1\8\e\c\5\p\7\1\g\l\1\i\m\g\x\d\v\5\u\6\7\4\2\q\b\l\b\r\n\l\s\g\e\s\z\0\u\s\t\a\j\e\i\o\2\8\s\7\k\k\i\p\m\y\n\7\a\c\c\b\4\s\d\n\p\r\8\e\i\h\m\s\p\7\r\i\7\w\9\7\v\q\f\p\4\6\l\f\m\q\2\4\g\b\1\d\f\a\j\n\y\g\1\8\c\m\b\l\b\y\e\a\g\1\u\e\d\j\y\i\j\4\w\9\n\q\j\b\h\z\a\d\8\l\j\u\m\5\l\j\4\0\q\l\z\m\b\r\c\7\k\t\h\h\b\a\o\k\6\9\u\9\9\l\b\r\g\o\j\c\2\f\a\1\g\y\b\y\5\4\u\5\j\1\f\l\6\9\u\a\1\s\w\e\u\p\3\q\5\8\w\4\d\t\e\o\3\j\5\j\8\0\n\y\b\9\n\3\q\v\5\8\l\n\f\i\q\h\8\i\k\6\f\b\6\4\5\v\x\6\a\y\z\7\t\d\m\4\l\u\k\g\g\w\t\9\3\7\w\s\o\7\3\s\p\t\8\b\b\a\9\t\u\z\u\e\3\0\d\d\n\j\v\n\o\b\d\6\1\s\l\b\7\0\b\w\4\j\m\u\9\m\0\y\o\1\3\j\b\a\1\3\d\s ]] 00:06:31.187 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.187 08:40:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:31.187 [2024-12-11 08:40:38.954014] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:31.187 [2024-12-11 08:40:38.954117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61350 ] 00:06:31.445 [2024-12-11 08:40:39.101030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.445 [2024-12-11 08:40:39.133799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.445 [2024-12-11 08:40:39.162681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.445  [2024-12-11T08:40:39.478Z] Copying: 512/512 [B] (average 166 kBps) 00:06:31.704 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ru73cccsf3wpv16jcyvkm512kfhjnhnybyxuk4wj88x1wivtzqvffzebngl79cd9lxe35gzd0ahnc83yhy2lewoqrk0gguyomdlmjo1byj2kwdkbvdxlpj5mj83k37ob6kucwyvuj3g54scfifpom0nvxrf78292oi66n3imvlk0qu2ezgutdbhhh8bp0uhaaiyb3s18ec5p71gl1imgxdv5u6742qblbrnlsgesz0ustajeio28s7kkipmyn7accb4sdnpr8eihmsp7ri7w97vqfp46lfmq24gb1dfajnyg18cmblbyeag1uedjyij4w9nqjbhzad8ljum5lj40qlzmbrc7kthhbaok69u99lbrgojc2fa1gyby54u5j1fl69ua1sweup3q58w4dteo3j5j80nyb9n3qv58lnfiqh8ik6fb645vx6ayz7tdm4lukggwt937wso73spt8bba9tuzue30ddnjvnobd61slb70bw4jmu9m0yo13jba13ds == \r\u\7\3\c\c\c\s\f\3\w\p\v\1\6\j\c\y\v\k\m\5\1\2\k\f\h\j\n\h\n\y\b\y\x\u\k\4\w\j\8\8\x\1\w\i\v\t\z\q\v\f\f\z\e\b\n\g\l\7\9\c\d\9\l\x\e\3\5\g\z\d\0\a\h\n\c\8\3\y\h\y\2\l\e\w\o\q\r\k\0\g\g\u\y\o\m\d\l\m\j\o\1\b\y\j\2\k\w\d\k\b\v\d\x\l\p\j\5\m\j\8\3\k\3\7\o\b\6\k\u\c\w\y\v\u\j\3\g\5\4\s\c\f\i\f\p\o\m\0\n\v\x\r\f\7\8\2\9\2\o\i\6\6\n\3\i\m\v\l\k\0\q\u\2\e\z\g\u\t\d\b\h\h\h\8\b\p\0\u\h\a\a\i\y\b\3\s\1\8\e\c\5\p\7\1\g\l\1\i\m\g\x\d\v\5\u\6\7\4\2\q\b\l\b\r\n\l\s\g\e\s\z\0\u\s\t\a\j\e\i\o\2\8\s\7\k\k\i\p\m\y\n\7\a\c\c\b\4\s\d\n\p\r\8\e\i\h\m\s\p\7\r\i\7\w\9\7\v\q\f\p\4\6\l\f\m\q\2\4\g\b\1\d\f\a\j\n\y\g\1\8\c\m\b\l\b\y\e\a\g\1\u\e\d\j\y\i\j\4\w\9\n\q\j\b\h\z\a\d\8\l\j\u\m\5\l\j\4\0\q\l\z\m\b\r\c\7\k\t\h\h\b\a\o\k\6\9\u\9\9\l\b\r\g\o\j\c\2\f\a\1\g\y\b\y\5\4\u\5\j\1\f\l\6\9\u\a\1\s\w\e\u\p\3\q\5\8\w\4\d\t\e\o\3\j\5\j\8\0\n\y\b\9\n\3\q\v\5\8\l\n\f\i\q\h\8\i\k\6\f\b\6\4\5\v\x\6\a\y\z\7\t\d\m\4\l\u\k\g\g\w\t\9\3\7\w\s\o\7\3\s\p\t\8\b\b\a\9\t\u\z\u\e\3\0\d\d\n\j\v\n\o\b\d\6\1\s\l\b\7\0\b\w\4\j\m\u\9\m\0\y\o\1\3\j\b\a\1\3\d\s ]] 00:06:31.704 00:06:31.704 real 0m3.219s 00:06:31.704 user 0m1.683s 00:06:31.704 sys 0m1.350s 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.704 ************************************ 00:06:31.704 END TEST dd_flags_misc 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:31.704 ************************************ 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:31.704 * Second test run, disabling liburing, forcing AIO 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.704 08:40:39 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.704 ************************************ 00:06:31.704 START TEST dd_flag_append_forced_aio 00:06:31.705 ************************************ 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=xseju4sauipvqfc4a61opnmpu754xgp6 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=x0cj35rdmn151av00j2fxspanednft36 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s xseju4sauipvqfc4a61opnmpu754xgp6 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s x0cj35rdmn151av00j2fxspanednft36 00:06:31.705 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:31.705 [2024-12-11 08:40:39.408488] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
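The append case just launched is easiest to see as a tiny shell sketch. This is reconstructed from the trace, not taken from dd/posix.sh: the random generation stands in for gen_bytes 32, and the final check mirrors the concatenation comparison that follows in the log.

```bash
# Sketch of the dd_flag_append_forced_aio check (reconstructed, not the real test).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(tr -dc 'a-z0-9' </dev/urandom | head -c 32)    # stand-in for "gen_bytes 32"
dump1=$(tr -dc 'a-z0-9' </dev/urandom | head -c 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
# --oflag=append opens the output O_APPEND, so dump0's bytes must land after dump1's
"$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ "$(<dd.dump1)" == "${dump1}${dump0}" ]] || echo "append did not preserve existing bytes"
```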
00:06:31.705 [2024-12-11 08:40:39.408593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:06:31.964 [2024-12-11 08:40:39.552303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.964 [2024-12-11 08:40:39.585338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.964 [2024-12-11 08:40:39.614825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.964  [2024-12-11T08:40:39.996Z] Copying: 32/32 [B] (average 31 kBps) 00:06:32.222 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ x0cj35rdmn151av00j2fxspanednft36xseju4sauipvqfc4a61opnmpu754xgp6 == \x\0\c\j\3\5\r\d\m\n\1\5\1\a\v\0\0\j\2\f\x\s\p\a\n\e\d\n\f\t\3\6\x\s\e\j\u\4\s\a\u\i\p\v\q\f\c\4\a\6\1\o\p\n\m\p\u\7\5\4\x\g\p\6 ]] 00:06:32.223 00:06:32.223 real 0m0.431s 00:06:32.223 user 0m0.220s 00:06:32.223 sys 0m0.090s 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.223 ************************************ 00:06:32.223 END TEST dd_flag_append_forced_aio 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 START TEST dd_flag_directory_forced_aio 00:06:32.223 ************************************ 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.223 08:40:39 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.223 08:40:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.223 [2024-12-11 08:40:39.901062] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:32.223 [2024-12-11 08:40:39.901212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61405 ] 00:06:32.481 [2024-12-11 08:40:40.055636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.481 [2024-12-11 08:40:40.088504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.481 [2024-12-11 08:40:40.117443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.481 [2024-12-11 08:40:40.136962] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.481 [2024-12-11 08:40:40.137027] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.481 [2024-12-11 08:40:40.137048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.481 [2024-12-11 08:40:40.203066] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.740 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:32.740 [2024-12-11 08:40:40.328433] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:32.740 [2024-12-11 08:40:40.328549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:06:32.740 [2024-12-11 08:40:40.478047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.740 [2024-12-11 08:40:40.510884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.999 [2024-12-11 08:40:40.539668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.999 [2024-12-11 08:40:40.558800] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.999 [2024-12-11 08:40:40.558860] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:32.999 [2024-12-11 08:40:40.558883] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.999 [2024-12-11 08:40:40.622160] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:32.999 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:32.999 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:33.000 08:40:40 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.000 00:06:33.000 real 0m0.846s 00:06:33.000 user 0m0.447s 00:06:33.000 sys 0m0.190s 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 END TEST dd_flag_directory_forced_aio 00:06:33.000 ************************************ 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 START TEST dd_flag_nofollow_forced_aio 00:06:33.000 ************************************ 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.000 08:40:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.258 [2024-12-11 08:40:40.798056] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:33.258 [2024-12-11 08:40:40.798154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:06:33.258 [2024-12-11 08:40:40.943636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.258 [2024-12-11 08:40:40.976649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.258 [2024-12-11 08:40:41.005859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.258 [2024-12-11 08:40:41.025295] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:33.258 [2024-12-11 08:40:41.025359] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:33.258 [2024-12-11 08:40:41.025383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.517 [2024-12-11 08:40:41.091637] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.517 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:33.517 [2024-12-11 08:40:41.200777] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:33.517 [2024-12-11 08:40:41.200861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61447 ] 00:06:33.776 [2024-12-11 08:40:41.346944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.776 [2024-12-11 08:40:41.380943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.776 [2024-12-11 08:40:41.409599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.776 [2024-12-11 08:40:41.428898] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:33.776 [2024-12-11 08:40:41.428962] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:33.776 [2024-12-11 08:40:41.428986] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.776 [2024-12-11 08:40:41.493088] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:34.034 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:34.034 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.034 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:34.034 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.034 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:34.035 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.035 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:34.035 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:34.035 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.035 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.035 [2024-12-11 08:40:41.615051] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:34.035 [2024-12-11 08:40:41.615175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61449 ] 00:06:34.035 [2024-12-11 08:40:41.763190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.035 [2024-12-11 08:40:41.796108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.359 [2024-12-11 08:40:41.824958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.359  [2024-12-11T08:40:42.133Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.359 00:06:34.359 ************************************ 00:06:34.359 END TEST dd_flag_nofollow_forced_aio 00:06:34.359 ************************************ 00:06:34.359 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ l3cfvu9rt9rtkf2tg9ogmeikoxm3todrd84uxfcb0x7kub9wcez321xcaqm1spoqwp3ms5yy8fkymlkz9e5uwvhrwmnzznqyigyd5mkhwcjj8ee3f2syjmhv4eaocw97u3zczr3v0ehiqewkbs16vd5avtpc7kg1jx4s5nsv8awhguoqghtfipmumhmwi93mzse03urg9no2069dsi7aogaknnhgvywhqhw3sh96y0y55b3jumd3hkzwij9jvq7umd1t1veoopclozzmi1slyztx95yf8ds3mbnyy3qx8cbtw1wvcrpydrls6jd26yg17ty9hqgot0lri8rgfkcvtfdbsrxs04jn5xotiurtan8zv6lg97u2l12g58n0jtuv4xy63dvy4hb9siq1qey60vujkxaxvcb7byiokuaa0nsnatawjs9idstvblp77xt5djryykrkbnz22vh1j5uov48tsczj4860fvavrt3a79msbzvnfvsetdhijo4cz5rp == \l\3\c\f\v\u\9\r\t\9\r\t\k\f\2\t\g\9\o\g\m\e\i\k\o\x\m\3\t\o\d\r\d\8\4\u\x\f\c\b\0\x\7\k\u\b\9\w\c\e\z\3\2\1\x\c\a\q\m\1\s\p\o\q\w\p\3\m\s\5\y\y\8\f\k\y\m\l\k\z\9\e\5\u\w\v\h\r\w\m\n\z\z\n\q\y\i\g\y\d\5\m\k\h\w\c\j\j\8\e\e\3\f\2\s\y\j\m\h\v\4\e\a\o\c\w\9\7\u\3\z\c\z\r\3\v\0\e\h\i\q\e\w\k\b\s\1\6\v\d\5\a\v\t\p\c\7\k\g\1\j\x\4\s\5\n\s\v\8\a\w\h\g\u\o\q\g\h\t\f\i\p\m\u\m\h\m\w\i\9\3\m\z\s\e\0\3\u\r\g\9\n\o\2\0\6\9\d\s\i\7\a\o\g\a\k\n\n\h\g\v\y\w\h\q\h\w\3\s\h\9\6\y\0\y\5\5\b\3\j\u\m\d\3\h\k\z\w\i\j\9\j\v\q\7\u\m\d\1\t\1\v\e\o\o\p\c\l\o\z\z\m\i\1\s\l\y\z\t\x\9\5\y\f\8\d\s\3\m\b\n\y\y\3\q\x\8\c\b\t\w\1\w\v\c\r\p\y\d\r\l\s\6\j\d\2\6\y\g\1\7\t\y\9\h\q\g\o\t\0\l\r\i\8\r\g\f\k\c\v\t\f\d\b\s\r\x\s\0\4\j\n\5\x\o\t\i\u\r\t\a\n\8\z\v\6\l\g\9\7\u\2\l\1\2\g\5\8\n\0\j\t\u\v\4\x\y\6\3\d\v\y\4\h\b\9\s\i\q\1\q\e\y\6\0\v\u\j\k\x\a\x\v\c\b\7\b\y\i\o\k\u\a\a\0\n\s\n\a\t\a\w\j\s\9\i\d\s\t\v\b\l\p\7\7\x\t\5\d\j\r\y\y\k\r\k\b\n\z\2\2\v\h\1\j\5\u\o\v\4\8\t\s\c\z\j\4\8\6\0\f\v\a\v\r\t\3\a\7\9\m\s\b\z\v\n\f\v\s\e\t\d\h\i\j\o\4\c\z\5\r\p ]] 00:06:34.359 00:06:34.359 real 0m1.269s 00:06:34.359 user 0m0.669s 00:06:34.359 sys 0m0.273s 00:06:34.359 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.359 08:40:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.359 ************************************ 00:06:34.359 START TEST dd_flag_noatime_forced_aio 00:06:34.359 ************************************ 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733906441 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733906441 00:06:34.359 08:40:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:35.295 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.553 [2024-12-11 08:40:43.120157] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
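The noatime check running here follows a simple record/sleep/verify pattern. A minimal sketch, reconstructed from the trace rather than copied from dd/posix.sh:

```bash
# Sketch of the dd_flag_noatime_forced_aio check (reconstructed, not the real test).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_if=$(stat --printf=%X dd.dump0)        # access time before any copy (epoch seconds)
sleep 1                                       # so a real atime update would be observable
"$SPDK_DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if )) || echo "--iflag=noatime still bumped atime"
# The test then repeats the copy without --iflag=noatime and expects the saved
# atime to end up older than the input file's refreshed access time.
```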
00:06:35.553 [2024-12-11 08:40:43.120260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61495 ] 00:06:35.553 [2024-12-11 08:40:43.271717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.553 [2024-12-11 08:40:43.310403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.812 [2024-12-11 08:40:43.342516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.812  [2024-12-11T08:40:43.586Z] Copying: 512/512 [B] (average 500 kBps) 00:06:35.812 00:06:35.812 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.812 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733906441 )) 00:06:35.812 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.812 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733906441 )) 00:06:35.812 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.812 [2024-12-11 08:40:43.567892] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:35.812 [2024-12-11 08:40:43.568212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61501 ] 00:06:36.071 [2024-12-11 08:40:43.712840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.071 [2024-12-11 08:40:43.745752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.071 [2024-12-11 08:40:43.774617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.071  [2024-12-11T08:40:44.104Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.330 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733906443 )) 00:06:36.330 00:06:36.330 real 0m1.903s 00:06:36.330 user 0m0.464s 00:06:36.330 sys 0m0.199s 00:06:36.330 ************************************ 00:06:36.330 END TEST dd_flag_noatime_forced_aio 00:06:36.330 ************************************ 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.330 ************************************ 00:06:36.330 START TEST dd_flags_misc_forced_aio 00:06:36.330 ************************************ 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.330 08:40:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.330 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.330 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:36.330 [2024-12-11 08:40:44.056110] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:36.330 [2024-12-11 08:40:44.056439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61533 ] 00:06:36.588 [2024-12-11 08:40:44.204191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.588 [2024-12-11 08:40:44.237459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.588 [2024-12-11 08:40:44.266607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.588  [2024-12-11T08:40:44.621Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.847 00:06:36.847 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ctrtwjh75sbvje2cw9aayb7y0j416pwbwjojhq704mlzcxg6wwc8mbbqzs79ijq1x3fbde4l6v128154yixop7iu417c1sn92eanefk9bdnji54k4kl77d056yrthm9xs52phsq5zwzd0ianqbnbyvp6j9io55lim9all2c0dogibzi66jyj8y6ce996buzlhfr80gl1pa5f6l2kyozdfq0dcdi5jod9u31s8tq2wj65xg6lqas2ztwm3qhet0imiyl8kjtozaxs07q4vcukraf3u7cjkp0kgq82ugj4ikx8wu4oxl8gsymgc4qcuqtxiks6ndppm2tizl8gbul4ygy3pnkp51d2yyg4m5rvzn8gqiu1a9hcr4k8tw2jftssjk8w9rpv12ju6gu43gagb41on6mow989wtzoxqlmfo3q6bvo7airmksfwvo3qet941ogn5wjwnae6ikphedhiqb6117n9dttnc9usotp9x2mtipncofwi4ss8x47f4bj == 
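As the '* Second test run' banner noted earlier, dd_flags_misc_forced_aio re-drives the same direct/nonblock/sync/dsync matrix with --aio prepended to every invocation, so file I/O is forced onto AIO instead of liburing (per the banner's own wording). A minimal sketch of that wrapper; the initial contents of DD_APP are an assumption based on how the later invocations appear in the trace.

```bash
# Sketch of the forced-AIO re-run (reconstructed, not the real dd/posix.sh).
DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)   # assumed initial value
DD_APP+=("--aio")                                          # added once for the second run
# Every test invocation then expands the array, e.g. the direct/direct cell:
"${DD_APP[@]}" --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=direct
```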
\c\t\r\t\w\j\h\7\5\s\b\v\j\e\2\c\w\9\a\a\y\b\7\y\0\j\4\1\6\p\w\b\w\j\o\j\h\q\7\0\4\m\l\z\c\x\g\6\w\w\c\8\m\b\b\q\z\s\7\9\i\j\q\1\x\3\f\b\d\e\4\l\6\v\1\2\8\1\5\4\y\i\x\o\p\7\i\u\4\1\7\c\1\s\n\9\2\e\a\n\e\f\k\9\b\d\n\j\i\5\4\k\4\k\l\7\7\d\0\5\6\y\r\t\h\m\9\x\s\5\2\p\h\s\q\5\z\w\z\d\0\i\a\n\q\b\n\b\y\v\p\6\j\9\i\o\5\5\l\i\m\9\a\l\l\2\c\0\d\o\g\i\b\z\i\6\6\j\y\j\8\y\6\c\e\9\9\6\b\u\z\l\h\f\r\8\0\g\l\1\p\a\5\f\6\l\2\k\y\o\z\d\f\q\0\d\c\d\i\5\j\o\d\9\u\3\1\s\8\t\q\2\w\j\6\5\x\g\6\l\q\a\s\2\z\t\w\m\3\q\h\e\t\0\i\m\i\y\l\8\k\j\t\o\z\a\x\s\0\7\q\4\v\c\u\k\r\a\f\3\u\7\c\j\k\p\0\k\g\q\8\2\u\g\j\4\i\k\x\8\w\u\4\o\x\l\8\g\s\y\m\g\c\4\q\c\u\q\t\x\i\k\s\6\n\d\p\p\m\2\t\i\z\l\8\g\b\u\l\4\y\g\y\3\p\n\k\p\5\1\d\2\y\y\g\4\m\5\r\v\z\n\8\g\q\i\u\1\a\9\h\c\r\4\k\8\t\w\2\j\f\t\s\s\j\k\8\w\9\r\p\v\1\2\j\u\6\g\u\4\3\g\a\g\b\4\1\o\n\6\m\o\w\9\8\9\w\t\z\o\x\q\l\m\f\o\3\q\6\b\v\o\7\a\i\r\m\k\s\f\w\v\o\3\q\e\t\9\4\1\o\g\n\5\w\j\w\n\a\e\6\i\k\p\h\e\d\h\i\q\b\6\1\1\7\n\9\d\t\t\n\c\9\u\s\o\t\p\9\x\2\m\t\i\p\n\c\o\f\w\i\4\s\s\8\x\4\7\f\4\b\j ]] 00:06:36.847 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.847 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:36.847 [2024-12-11 08:40:44.480498] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:36.847 [2024-12-11 08:40:44.480824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61535 ] 00:06:37.105 [2024-12-11 08:40:44.627402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.105 [2024-12-11 08:40:44.661940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.105 [2024-12-11 08:40:44.691857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.105  [2024-12-11T08:40:44.879Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.105 00:06:37.106 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ctrtwjh75sbvje2cw9aayb7y0j416pwbwjojhq704mlzcxg6wwc8mbbqzs79ijq1x3fbde4l6v128154yixop7iu417c1sn92eanefk9bdnji54k4kl77d056yrthm9xs52phsq5zwzd0ianqbnbyvp6j9io55lim9all2c0dogibzi66jyj8y6ce996buzlhfr80gl1pa5f6l2kyozdfq0dcdi5jod9u31s8tq2wj65xg6lqas2ztwm3qhet0imiyl8kjtozaxs07q4vcukraf3u7cjkp0kgq82ugj4ikx8wu4oxl8gsymgc4qcuqtxiks6ndppm2tizl8gbul4ygy3pnkp51d2yyg4m5rvzn8gqiu1a9hcr4k8tw2jftssjk8w9rpv12ju6gu43gagb41on6mow989wtzoxqlmfo3q6bvo7airmksfwvo3qet941ogn5wjwnae6ikphedhiqb6117n9dttnc9usotp9x2mtipncofwi4ss8x47f4bj == 
\c\t\r\t\w\j\h\7\5\s\b\v\j\e\2\c\w\9\a\a\y\b\7\y\0\j\4\1\6\p\w\b\w\j\o\j\h\q\7\0\4\m\l\z\c\x\g\6\w\w\c\8\m\b\b\q\z\s\7\9\i\j\q\1\x\3\f\b\d\e\4\l\6\v\1\2\8\1\5\4\y\i\x\o\p\7\i\u\4\1\7\c\1\s\n\9\2\e\a\n\e\f\k\9\b\d\n\j\i\5\4\k\4\k\l\7\7\d\0\5\6\y\r\t\h\m\9\x\s\5\2\p\h\s\q\5\z\w\z\d\0\i\a\n\q\b\n\b\y\v\p\6\j\9\i\o\5\5\l\i\m\9\a\l\l\2\c\0\d\o\g\i\b\z\i\6\6\j\y\j\8\y\6\c\e\9\9\6\b\u\z\l\h\f\r\8\0\g\l\1\p\a\5\f\6\l\2\k\y\o\z\d\f\q\0\d\c\d\i\5\j\o\d\9\u\3\1\s\8\t\q\2\w\j\6\5\x\g\6\l\q\a\s\2\z\t\w\m\3\q\h\e\t\0\i\m\i\y\l\8\k\j\t\o\z\a\x\s\0\7\q\4\v\c\u\k\r\a\f\3\u\7\c\j\k\p\0\k\g\q\8\2\u\g\j\4\i\k\x\8\w\u\4\o\x\l\8\g\s\y\m\g\c\4\q\c\u\q\t\x\i\k\s\6\n\d\p\p\m\2\t\i\z\l\8\g\b\u\l\4\y\g\y\3\p\n\k\p\5\1\d\2\y\y\g\4\m\5\r\v\z\n\8\g\q\i\u\1\a\9\h\c\r\4\k\8\t\w\2\j\f\t\s\s\j\k\8\w\9\r\p\v\1\2\j\u\6\g\u\4\3\g\a\g\b\4\1\o\n\6\m\o\w\9\8\9\w\t\z\o\x\q\l\m\f\o\3\q\6\b\v\o\7\a\i\r\m\k\s\f\w\v\o\3\q\e\t\9\4\1\o\g\n\5\w\j\w\n\a\e\6\i\k\p\h\e\d\h\i\q\b\6\1\1\7\n\9\d\t\t\n\c\9\u\s\o\t\p\9\x\2\m\t\i\p\n\c\o\f\w\i\4\s\s\8\x\4\7\f\4\b\j ]] 00:06:37.106 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.106 08:40:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:37.364 [2024-12-11 08:40:44.909341] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:37.364 [2024-12-11 08:40:44.909436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61548 ] 00:06:37.364 [2024-12-11 08:40:45.050468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.364 [2024-12-11 08:40:45.083583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.364 [2024-12-11 08:40:45.112607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.364  [2024-12-11T08:40:45.397Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.623 00:06:37.623 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ctrtwjh75sbvje2cw9aayb7y0j416pwbwjojhq704mlzcxg6wwc8mbbqzs79ijq1x3fbde4l6v128154yixop7iu417c1sn92eanefk9bdnji54k4kl77d056yrthm9xs52phsq5zwzd0ianqbnbyvp6j9io55lim9all2c0dogibzi66jyj8y6ce996buzlhfr80gl1pa5f6l2kyozdfq0dcdi5jod9u31s8tq2wj65xg6lqas2ztwm3qhet0imiyl8kjtozaxs07q4vcukraf3u7cjkp0kgq82ugj4ikx8wu4oxl8gsymgc4qcuqtxiks6ndppm2tizl8gbul4ygy3pnkp51d2yyg4m5rvzn8gqiu1a9hcr4k8tw2jftssjk8w9rpv12ju6gu43gagb41on6mow989wtzoxqlmfo3q6bvo7airmksfwvo3qet941ogn5wjwnae6ikphedhiqb6117n9dttnc9usotp9x2mtipncofwi4ss8x47f4bj == 
\c\t\r\t\w\j\h\7\5\s\b\v\j\e\2\c\w\9\a\a\y\b\7\y\0\j\4\1\6\p\w\b\w\j\o\j\h\q\7\0\4\m\l\z\c\x\g\6\w\w\c\8\m\b\b\q\z\s\7\9\i\j\q\1\x\3\f\b\d\e\4\l\6\v\1\2\8\1\5\4\y\i\x\o\p\7\i\u\4\1\7\c\1\s\n\9\2\e\a\n\e\f\k\9\b\d\n\j\i\5\4\k\4\k\l\7\7\d\0\5\6\y\r\t\h\m\9\x\s\5\2\p\h\s\q\5\z\w\z\d\0\i\a\n\q\b\n\b\y\v\p\6\j\9\i\o\5\5\l\i\m\9\a\l\l\2\c\0\d\o\g\i\b\z\i\6\6\j\y\j\8\y\6\c\e\9\9\6\b\u\z\l\h\f\r\8\0\g\l\1\p\a\5\f\6\l\2\k\y\o\z\d\f\q\0\d\c\d\i\5\j\o\d\9\u\3\1\s\8\t\q\2\w\j\6\5\x\g\6\l\q\a\s\2\z\t\w\m\3\q\h\e\t\0\i\m\i\y\l\8\k\j\t\o\z\a\x\s\0\7\q\4\v\c\u\k\r\a\f\3\u\7\c\j\k\p\0\k\g\q\8\2\u\g\j\4\i\k\x\8\w\u\4\o\x\l\8\g\s\y\m\g\c\4\q\c\u\q\t\x\i\k\s\6\n\d\p\p\m\2\t\i\z\l\8\g\b\u\l\4\y\g\y\3\p\n\k\p\5\1\d\2\y\y\g\4\m\5\r\v\z\n\8\g\q\i\u\1\a\9\h\c\r\4\k\8\t\w\2\j\f\t\s\s\j\k\8\w\9\r\p\v\1\2\j\u\6\g\u\4\3\g\a\g\b\4\1\o\n\6\m\o\w\9\8\9\w\t\z\o\x\q\l\m\f\o\3\q\6\b\v\o\7\a\i\r\m\k\s\f\w\v\o\3\q\e\t\9\4\1\o\g\n\5\w\j\w\n\a\e\6\i\k\p\h\e\d\h\i\q\b\6\1\1\7\n\9\d\t\t\n\c\9\u\s\o\t\p\9\x\2\m\t\i\p\n\c\o\f\w\i\4\s\s\8\x\4\7\f\4\b\j ]] 00:06:37.623 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.623 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:37.623 [2024-12-11 08:40:45.324792] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:37.623 [2024-12-11 08:40:45.325105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:06:37.881 [2024-12-11 08:40:45.469154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.881 [2024-12-11 08:40:45.502060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.881 [2024-12-11 08:40:45.530892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.881  [2024-12-11T08:40:45.913Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.139 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ctrtwjh75sbvje2cw9aayb7y0j416pwbwjojhq704mlzcxg6wwc8mbbqzs79ijq1x3fbde4l6v128154yixop7iu417c1sn92eanefk9bdnji54k4kl77d056yrthm9xs52phsq5zwzd0ianqbnbyvp6j9io55lim9all2c0dogibzi66jyj8y6ce996buzlhfr80gl1pa5f6l2kyozdfq0dcdi5jod9u31s8tq2wj65xg6lqas2ztwm3qhet0imiyl8kjtozaxs07q4vcukraf3u7cjkp0kgq82ugj4ikx8wu4oxl8gsymgc4qcuqtxiks6ndppm2tizl8gbul4ygy3pnkp51d2yyg4m5rvzn8gqiu1a9hcr4k8tw2jftssjk8w9rpv12ju6gu43gagb41on6mow989wtzoxqlmfo3q6bvo7airmksfwvo3qet941ogn5wjwnae6ikphedhiqb6117n9dttnc9usotp9x2mtipncofwi4ss8x47f4bj == 
\c\t\r\t\w\j\h\7\5\s\b\v\j\e\2\c\w\9\a\a\y\b\7\y\0\j\4\1\6\p\w\b\w\j\o\j\h\q\7\0\4\m\l\z\c\x\g\6\w\w\c\8\m\b\b\q\z\s\7\9\i\j\q\1\x\3\f\b\d\e\4\l\6\v\1\2\8\1\5\4\y\i\x\o\p\7\i\u\4\1\7\c\1\s\n\9\2\e\a\n\e\f\k\9\b\d\n\j\i\5\4\k\4\k\l\7\7\d\0\5\6\y\r\t\h\m\9\x\s\5\2\p\h\s\q\5\z\w\z\d\0\i\a\n\q\b\n\b\y\v\p\6\j\9\i\o\5\5\l\i\m\9\a\l\l\2\c\0\d\o\g\i\b\z\i\6\6\j\y\j\8\y\6\c\e\9\9\6\b\u\z\l\h\f\r\8\0\g\l\1\p\a\5\f\6\l\2\k\y\o\z\d\f\q\0\d\c\d\i\5\j\o\d\9\u\3\1\s\8\t\q\2\w\j\6\5\x\g\6\l\q\a\s\2\z\t\w\m\3\q\h\e\t\0\i\m\i\y\l\8\k\j\t\o\z\a\x\s\0\7\q\4\v\c\u\k\r\a\f\3\u\7\c\j\k\p\0\k\g\q\8\2\u\g\j\4\i\k\x\8\w\u\4\o\x\l\8\g\s\y\m\g\c\4\q\c\u\q\t\x\i\k\s\6\n\d\p\p\m\2\t\i\z\l\8\g\b\u\l\4\y\g\y\3\p\n\k\p\5\1\d\2\y\y\g\4\m\5\r\v\z\n\8\g\q\i\u\1\a\9\h\c\r\4\k\8\t\w\2\j\f\t\s\s\j\k\8\w\9\r\p\v\1\2\j\u\6\g\u\4\3\g\a\g\b\4\1\o\n\6\m\o\w\9\8\9\w\t\z\o\x\q\l\m\f\o\3\q\6\b\v\o\7\a\i\r\m\k\s\f\w\v\o\3\q\e\t\9\4\1\o\g\n\5\w\j\w\n\a\e\6\i\k\p\h\e\d\h\i\q\b\6\1\1\7\n\9\d\t\t\n\c\9\u\s\o\t\p\9\x\2\m\t\i\p\n\c\o\f\w\i\4\s\s\8\x\4\7\f\4\b\j ]] 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.140 08:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:38.140 [2024-12-11 08:40:45.748039] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:38.140 [2024-12-11 08:40:45.748117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61553 ] 00:06:38.140 [2024-12-11 08:40:45.890653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.398 [2024-12-11 08:40:45.923221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.398 [2024-12-11 08:40:45.951723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.398  [2024-12-11T08:40:46.172Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.398 00:06:38.398 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vft5yks3mcok00jvo3i3t40363nyq3nmgbcc90l6tt2cdzmmi7sy5xgq1r1s96jl401a7ge2e79ttg0pxj89xiywkwo2m1wt3clgaz9xk1tt2dd0t9hpykaks91motlfgd771kfc1h3mg25an85agx4953uuqod2ili8i3py45fdcgkaclujpss6rgcw2oar3wj44q4fujevi9wt08ylu7fesd1zvck9icdxc4zf9k0yxb4xkpm72pfr86v4kex8mkneh1ijfx0t654q8t8chfxto5q3dc27x6038kjrs6gsfufc3p6uey3ntafq9l6x5ijxromgea4uouzrvonzdgop7zp9px7ilso7p53q7alrxrqgjiwkxvlpkntx636w7kgz1v0cxaaf4iwfo3wmvh5ziv46ptsvopazo9xq6ru93kt0o3l7zt4wosk4ginzx1n9arhe34y65x463do8eixuvc7jwttoeyqif88z5b1qtd00wh794hnt2ryisnby == \v\f\t\5\y\k\s\3\m\c\o\k\0\0\j\v\o\3\i\3\t\4\0\3\6\3\n\y\q\3\n\m\g\b\c\c\9\0\l\6\t\t\2\c\d\z\m\m\i\7\s\y\5\x\g\q\1\r\1\s\9\6\j\l\4\0\1\a\7\g\e\2\e\7\9\t\t\g\0\p\x\j\8\9\x\i\y\w\k\w\o\2\m\1\w\t\3\c\l\g\a\z\9\x\k\1\t\t\2\d\d\0\t\9\h\p\y\k\a\k\s\9\1\m\o\t\l\f\g\d\7\7\1\k\f\c\1\h\3\m\g\2\5\a\n\8\5\a\g\x\4\9\5\3\u\u\q\o\d\2\i\l\i\8\i\3\p\y\4\5\f\d\c\g\k\a\c\l\u\j\p\s\s\6\r\g\c\w\2\o\a\r\3\w\j\4\4\q\4\f\u\j\e\v\i\9\w\t\0\8\y\l\u\7\f\e\s\d\1\z\v\c\k\9\i\c\d\x\c\4\z\f\9\k\0\y\x\b\4\x\k\p\m\7\2\p\f\r\8\6\v\4\k\e\x\8\m\k\n\e\h\1\i\j\f\x\0\t\6\5\4\q\8\t\8\c\h\f\x\t\o\5\q\3\d\c\2\7\x\6\0\3\8\k\j\r\s\6\g\s\f\u\f\c\3\p\6\u\e\y\3\n\t\a\f\q\9\l\6\x\5\i\j\x\r\o\m\g\e\a\4\u\o\u\z\r\v\o\n\z\d\g\o\p\7\z\p\9\p\x\7\i\l\s\o\7\p\5\3\q\7\a\l\r\x\r\q\g\j\i\w\k\x\v\l\p\k\n\t\x\6\3\6\w\7\k\g\z\1\v\0\c\x\a\a\f\4\i\w\f\o\3\w\m\v\h\5\z\i\v\4\6\p\t\s\v\o\p\a\z\o\9\x\q\6\r\u\9\3\k\t\0\o\3\l\7\z\t\4\w\o\s\k\4\g\i\n\z\x\1\n\9\a\r\h\e\3\4\y\6\5\x\4\6\3\d\o\8\e\i\x\u\v\c\7\j\w\t\t\o\e\y\q\i\f\8\8\z\5\b\1\q\t\d\0\0\w\h\7\9\4\h\n\t\2\r\y\i\s\n\b\y ]] 00:06:38.398 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.398 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:38.657 [2024-12-11 08:40:46.184355] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:38.657 [2024-12-11 08:40:46.184473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61565 ] 00:06:38.657 [2024-12-11 08:40:46.338109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.657 [2024-12-11 08:40:46.370893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.657 [2024-12-11 08:40:46.399458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.657  [2024-12-11T08:40:46.690Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.916 00:06:38.916 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vft5yks3mcok00jvo3i3t40363nyq3nmgbcc90l6tt2cdzmmi7sy5xgq1r1s96jl401a7ge2e79ttg0pxj89xiywkwo2m1wt3clgaz9xk1tt2dd0t9hpykaks91motlfgd771kfc1h3mg25an85agx4953uuqod2ili8i3py45fdcgkaclujpss6rgcw2oar3wj44q4fujevi9wt08ylu7fesd1zvck9icdxc4zf9k0yxb4xkpm72pfr86v4kex8mkneh1ijfx0t654q8t8chfxto5q3dc27x6038kjrs6gsfufc3p6uey3ntafq9l6x5ijxromgea4uouzrvonzdgop7zp9px7ilso7p53q7alrxrqgjiwkxvlpkntx636w7kgz1v0cxaaf4iwfo3wmvh5ziv46ptsvopazo9xq6ru93kt0o3l7zt4wosk4ginzx1n9arhe34y65x463do8eixuvc7jwttoeyqif88z5b1qtd00wh794hnt2ryisnby == \v\f\t\5\y\k\s\3\m\c\o\k\0\0\j\v\o\3\i\3\t\4\0\3\6\3\n\y\q\3\n\m\g\b\c\c\9\0\l\6\t\t\2\c\d\z\m\m\i\7\s\y\5\x\g\q\1\r\1\s\9\6\j\l\4\0\1\a\7\g\e\2\e\7\9\t\t\g\0\p\x\j\8\9\x\i\y\w\k\w\o\2\m\1\w\t\3\c\l\g\a\z\9\x\k\1\t\t\2\d\d\0\t\9\h\p\y\k\a\k\s\9\1\m\o\t\l\f\g\d\7\7\1\k\f\c\1\h\3\m\g\2\5\a\n\8\5\a\g\x\4\9\5\3\u\u\q\o\d\2\i\l\i\8\i\3\p\y\4\5\f\d\c\g\k\a\c\l\u\j\p\s\s\6\r\g\c\w\2\o\a\r\3\w\j\4\4\q\4\f\u\j\e\v\i\9\w\t\0\8\y\l\u\7\f\e\s\d\1\z\v\c\k\9\i\c\d\x\c\4\z\f\9\k\0\y\x\b\4\x\k\p\m\7\2\p\f\r\8\6\v\4\k\e\x\8\m\k\n\e\h\1\i\j\f\x\0\t\6\5\4\q\8\t\8\c\h\f\x\t\o\5\q\3\d\c\2\7\x\6\0\3\8\k\j\r\s\6\g\s\f\u\f\c\3\p\6\u\e\y\3\n\t\a\f\q\9\l\6\x\5\i\j\x\r\o\m\g\e\a\4\u\o\u\z\r\v\o\n\z\d\g\o\p\7\z\p\9\p\x\7\i\l\s\o\7\p\5\3\q\7\a\l\r\x\r\q\g\j\i\w\k\x\v\l\p\k\n\t\x\6\3\6\w\7\k\g\z\1\v\0\c\x\a\a\f\4\i\w\f\o\3\w\m\v\h\5\z\i\v\4\6\p\t\s\v\o\p\a\z\o\9\x\q\6\r\u\9\3\k\t\0\o\3\l\7\z\t\4\w\o\s\k\4\g\i\n\z\x\1\n\9\a\r\h\e\3\4\y\6\5\x\4\6\3\d\o\8\e\i\x\u\v\c\7\j\w\t\t\o\e\y\q\i\f\8\8\z\5\b\1\q\t\d\0\0\w\h\7\9\4\h\n\t\2\r\y\i\s\n\b\y ]] 00:06:38.916 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.916 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:38.916 [2024-12-11 08:40:46.611928] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:38.916 [2024-12-11 08:40:46.612024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61567 ] 00:06:39.175 [2024-12-11 08:40:46.754256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.175 [2024-12-11 08:40:46.786642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.175 [2024-12-11 08:40:46.814973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.175  [2024-12-11T08:40:47.207Z] Copying: 512/512 [B] (average 500 kBps) 00:06:39.433 00:06:39.433 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vft5yks3mcok00jvo3i3t40363nyq3nmgbcc90l6tt2cdzmmi7sy5xgq1r1s96jl401a7ge2e79ttg0pxj89xiywkwo2m1wt3clgaz9xk1tt2dd0t9hpykaks91motlfgd771kfc1h3mg25an85agx4953uuqod2ili8i3py45fdcgkaclujpss6rgcw2oar3wj44q4fujevi9wt08ylu7fesd1zvck9icdxc4zf9k0yxb4xkpm72pfr86v4kex8mkneh1ijfx0t654q8t8chfxto5q3dc27x6038kjrs6gsfufc3p6uey3ntafq9l6x5ijxromgea4uouzrvonzdgop7zp9px7ilso7p53q7alrxrqgjiwkxvlpkntx636w7kgz1v0cxaaf4iwfo3wmvh5ziv46ptsvopazo9xq6ru93kt0o3l7zt4wosk4ginzx1n9arhe34y65x463do8eixuvc7jwttoeyqif88z5b1qtd00wh794hnt2ryisnby == \v\f\t\5\y\k\s\3\m\c\o\k\0\0\j\v\o\3\i\3\t\4\0\3\6\3\n\y\q\3\n\m\g\b\c\c\9\0\l\6\t\t\2\c\d\z\m\m\i\7\s\y\5\x\g\q\1\r\1\s\9\6\j\l\4\0\1\a\7\g\e\2\e\7\9\t\t\g\0\p\x\j\8\9\x\i\y\w\k\w\o\2\m\1\w\t\3\c\l\g\a\z\9\x\k\1\t\t\2\d\d\0\t\9\h\p\y\k\a\k\s\9\1\m\o\t\l\f\g\d\7\7\1\k\f\c\1\h\3\m\g\2\5\a\n\8\5\a\g\x\4\9\5\3\u\u\q\o\d\2\i\l\i\8\i\3\p\y\4\5\f\d\c\g\k\a\c\l\u\j\p\s\s\6\r\g\c\w\2\o\a\r\3\w\j\4\4\q\4\f\u\j\e\v\i\9\w\t\0\8\y\l\u\7\f\e\s\d\1\z\v\c\k\9\i\c\d\x\c\4\z\f\9\k\0\y\x\b\4\x\k\p\m\7\2\p\f\r\8\6\v\4\k\e\x\8\m\k\n\e\h\1\i\j\f\x\0\t\6\5\4\q\8\t\8\c\h\f\x\t\o\5\q\3\d\c\2\7\x\6\0\3\8\k\j\r\s\6\g\s\f\u\f\c\3\p\6\u\e\y\3\n\t\a\f\q\9\l\6\x\5\i\j\x\r\o\m\g\e\a\4\u\o\u\z\r\v\o\n\z\d\g\o\p\7\z\p\9\p\x\7\i\l\s\o\7\p\5\3\q\7\a\l\r\x\r\q\g\j\i\w\k\x\v\l\p\k\n\t\x\6\3\6\w\7\k\g\z\1\v\0\c\x\a\a\f\4\i\w\f\o\3\w\m\v\h\5\z\i\v\4\6\p\t\s\v\o\p\a\z\o\9\x\q\6\r\u\9\3\k\t\0\o\3\l\7\z\t\4\w\o\s\k\4\g\i\n\z\x\1\n\9\a\r\h\e\3\4\y\6\5\x\4\6\3\d\o\8\e\i\x\u\v\c\7\j\w\t\t\o\e\y\q\i\f\8\8\z\5\b\1\q\t\d\0\0\w\h\7\9\4\h\n\t\2\r\y\i\s\n\b\y ]] 00:06:39.434 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.434 08:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:39.434 [2024-12-11 08:40:47.036518] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:39.434 [2024-12-11 08:40:47.036620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:06:39.434 [2024-12-11 08:40:47.183555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.692 [2024-12-11 08:40:47.216239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.692 [2024-12-11 08:40:47.244763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.692  [2024-12-11T08:40:47.466Z] Copying: 512/512 [B] (average 500 kBps) 00:06:39.692 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vft5yks3mcok00jvo3i3t40363nyq3nmgbcc90l6tt2cdzmmi7sy5xgq1r1s96jl401a7ge2e79ttg0pxj89xiywkwo2m1wt3clgaz9xk1tt2dd0t9hpykaks91motlfgd771kfc1h3mg25an85agx4953uuqod2ili8i3py45fdcgkaclujpss6rgcw2oar3wj44q4fujevi9wt08ylu7fesd1zvck9icdxc4zf9k0yxb4xkpm72pfr86v4kex8mkneh1ijfx0t654q8t8chfxto5q3dc27x6038kjrs6gsfufc3p6uey3ntafq9l6x5ijxromgea4uouzrvonzdgop7zp9px7ilso7p53q7alrxrqgjiwkxvlpkntx636w7kgz1v0cxaaf4iwfo3wmvh5ziv46ptsvopazo9xq6ru93kt0o3l7zt4wosk4ginzx1n9arhe34y65x463do8eixuvc7jwttoeyqif88z5b1qtd00wh794hnt2ryisnby == \v\f\t\5\y\k\s\3\m\c\o\k\0\0\j\v\o\3\i\3\t\4\0\3\6\3\n\y\q\3\n\m\g\b\c\c\9\0\l\6\t\t\2\c\d\z\m\m\i\7\s\y\5\x\g\q\1\r\1\s\9\6\j\l\4\0\1\a\7\g\e\2\e\7\9\t\t\g\0\p\x\j\8\9\x\i\y\w\k\w\o\2\m\1\w\t\3\c\l\g\a\z\9\x\k\1\t\t\2\d\d\0\t\9\h\p\y\k\a\k\s\9\1\m\o\t\l\f\g\d\7\7\1\k\f\c\1\h\3\m\g\2\5\a\n\8\5\a\g\x\4\9\5\3\u\u\q\o\d\2\i\l\i\8\i\3\p\y\4\5\f\d\c\g\k\a\c\l\u\j\p\s\s\6\r\g\c\w\2\o\a\r\3\w\j\4\4\q\4\f\u\j\e\v\i\9\w\t\0\8\y\l\u\7\f\e\s\d\1\z\v\c\k\9\i\c\d\x\c\4\z\f\9\k\0\y\x\b\4\x\k\p\m\7\2\p\f\r\8\6\v\4\k\e\x\8\m\k\n\e\h\1\i\j\f\x\0\t\6\5\4\q\8\t\8\c\h\f\x\t\o\5\q\3\d\c\2\7\x\6\0\3\8\k\j\r\s\6\g\s\f\u\f\c\3\p\6\u\e\y\3\n\t\a\f\q\9\l\6\x\5\i\j\x\r\o\m\g\e\a\4\u\o\u\z\r\v\o\n\z\d\g\o\p\7\z\p\9\p\x\7\i\l\s\o\7\p\5\3\q\7\a\l\r\x\r\q\g\j\i\w\k\x\v\l\p\k\n\t\x\6\3\6\w\7\k\g\z\1\v\0\c\x\a\a\f\4\i\w\f\o\3\w\m\v\h\5\z\i\v\4\6\p\t\s\v\o\p\a\z\o\9\x\q\6\r\u\9\3\k\t\0\o\3\l\7\z\t\4\w\o\s\k\4\g\i\n\z\x\1\n\9\a\r\h\e\3\4\y\6\5\x\4\6\3\d\o\8\e\i\x\u\v\c\7\j\w\t\t\o\e\y\q\i\f\8\8\z\5\b\1\q\t\d\0\0\w\h\7\9\4\h\n\t\2\r\y\i\s\n\b\y ]] 00:06:39.692 00:06:39.692 real 0m3.419s 00:06:39.692 user 0m1.728s 00:06:39.692 sys 0m0.713s 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.692 ************************************ 00:06:39.692 END TEST dd_flags_misc_forced_aio 00:06:39.692 ************************************ 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:39.692 ************************************ 00:06:39.692 END TEST spdk_dd_posix 00:06:39.692 ************************************ 00:06:39.692 00:06:39.692 real 0m16.101s 00:06:39.692 user 0m7.162s 00:06:39.692 sys 0m4.317s 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.692 08:40:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 08:40:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:39.952 08:40:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.952 08:40:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.952 08:40:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 ************************************ 00:06:39.952 START TEST spdk_dd_malloc 00:06:39.952 ************************************ 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:39.952 * Looking for test storage... 00:06:39.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.952 --rc genhtml_branch_coverage=1 00:06:39.952 --rc genhtml_function_coverage=1 00:06:39.952 --rc genhtml_legend=1 00:06:39.952 --rc geninfo_all_blocks=1 00:06:39.952 --rc geninfo_unexecuted_blocks=1 00:06:39.952 00:06:39.952 ' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.952 --rc genhtml_branch_coverage=1 00:06:39.952 --rc genhtml_function_coverage=1 00:06:39.952 --rc genhtml_legend=1 00:06:39.952 --rc geninfo_all_blocks=1 00:06:39.952 --rc geninfo_unexecuted_blocks=1 00:06:39.952 00:06:39.952 ' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.952 --rc genhtml_branch_coverage=1 00:06:39.952 --rc genhtml_function_coverage=1 00:06:39.952 --rc genhtml_legend=1 00:06:39.952 --rc geninfo_all_blocks=1 00:06:39.952 --rc geninfo_unexecuted_blocks=1 00:06:39.952 00:06:39.952 ' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.952 --rc genhtml_branch_coverage=1 00:06:39.952 --rc genhtml_function_coverage=1 00:06:39.952 --rc genhtml_legend=1 00:06:39.952 --rc geninfo_all_blocks=1 00:06:39.952 --rc geninfo_unexecuted_blocks=1 00:06:39.952 00:06:39.952 ' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.952 08:40:47 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 ************************************ 00:06:39.952 START TEST dd_malloc_copy 00:06:39.952 ************************************ 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:39.952 08:40:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.211 { 00:06:40.211 "subsystems": [ 00:06:40.211 { 00:06:40.211 "subsystem": "bdev", 00:06:40.211 "config": [ 00:06:40.211 { 00:06:40.211 "params": { 00:06:40.211 "block_size": 512, 00:06:40.211 "num_blocks": 1048576, 00:06:40.211 "name": "malloc0" 00:06:40.211 }, 00:06:40.211 "method": "bdev_malloc_create" 00:06:40.211 }, 00:06:40.211 { 00:06:40.211 "params": { 00:06:40.211 "block_size": 512, 00:06:40.211 "num_blocks": 1048576, 00:06:40.211 "name": "malloc1" 00:06:40.211 }, 00:06:40.211 "method": "bdev_malloc_create" 00:06:40.211 }, 00:06:40.211 { 00:06:40.211 "method": "bdev_wait_for_examine" 00:06:40.211 } 00:06:40.211 ] 00:06:40.211 } 00:06:40.211 ] 00:06:40.211 } 00:06:40.211 [2024-12-11 08:40:47.756834] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:40.211 [2024-12-11 08:40:47.756986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61651 ] 00:06:40.211 [2024-12-11 08:40:47.911518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.211 [2024-12-11 08:40:47.944158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.211 [2024-12-11 08:40:47.973753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.587  [2024-12-11T08:40:50.296Z] Copying: 192/512 [MB] (192 MBps) [2024-12-11T08:40:50.862Z] Copying: 385/512 [MB] (192 MBps) [2024-12-11T08:40:51.429Z] Copying: 512/512 [MB] (average 192 MBps) 00:06:43.655 00:06:43.655 08:40:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:43.655 08:40:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:43.655 08:40:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:43.655 08:40:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.655 { 00:06:43.655 "subsystems": [ 00:06:43.655 { 00:06:43.655 "subsystem": "bdev", 00:06:43.655 "config": [ 00:06:43.655 { 00:06:43.655 "params": { 00:06:43.655 "block_size": 512, 00:06:43.655 "num_blocks": 1048576, 00:06:43.655 "name": "malloc0" 00:06:43.655 }, 00:06:43.655 "method": "bdev_malloc_create" 00:06:43.655 }, 00:06:43.655 { 00:06:43.655 "params": { 00:06:43.655 "block_size": 512, 00:06:43.655 "num_blocks": 1048576, 00:06:43.655 "name": "malloc1" 00:06:43.655 }, 00:06:43.655 "method": 
"bdev_malloc_create" 00:06:43.655 }, 00:06:43.655 { 00:06:43.655 "method": "bdev_wait_for_examine" 00:06:43.655 } 00:06:43.655 ] 00:06:43.655 } 00:06:43.655 ] 00:06:43.655 } 00:06:43.655 [2024-12-11 08:40:51.201192] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:43.655 [2024-12-11 08:40:51.201313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ] 00:06:43.655 [2024-12-11 08:40:51.359230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.655 [2024-12-11 08:40:51.399737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.913 [2024-12-11 08:40:51.433477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.289  [2024-12-11T08:40:53.998Z] Copying: 187/512 [MB] (187 MBps) [2024-12-11T08:40:54.564Z] Copying: 378/512 [MB] (191 MBps) [2024-12-11T08:40:54.823Z] Copying: 512/512 [MB] (average 189 MBps) 00:06:47.049 00:06:47.049 00:06:47.049 real 0m6.991s 00:06:47.049 user 0m6.320s 00:06:47.049 sys 0m0.509s 00:06:47.049 08:40:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.049 08:40:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.049 ************************************ 00:06:47.049 END TEST dd_malloc_copy 00:06:47.049 ************************************ 00:06:47.049 ************************************ 00:06:47.049 END TEST spdk_dd_malloc 00:06:47.049 ************************************ 00:06:47.049 00:06:47.049 real 0m7.213s 00:06:47.049 user 0m6.443s 00:06:47.049 sys 0m0.608s 00:06:47.049 08:40:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.049 08:40:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:47.049 08:40:54 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:47.049 08:40:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:47.049 08:40:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.049 08:40:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.049 ************************************ 00:06:47.049 START TEST spdk_dd_bdev_to_bdev 00:06:47.049 ************************************ 00:06:47.049 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:47.308 * Looking for test storage... 
00:06:47.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.308 --rc genhtml_branch_coverage=1 00:06:47.308 --rc genhtml_function_coverage=1 00:06:47.308 --rc genhtml_legend=1 00:06:47.308 --rc geninfo_all_blocks=1 00:06:47.308 --rc geninfo_unexecuted_blocks=1 00:06:47.308 00:06:47.308 ' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.308 --rc genhtml_branch_coverage=1 00:06:47.308 --rc genhtml_function_coverage=1 00:06:47.308 --rc genhtml_legend=1 00:06:47.308 --rc geninfo_all_blocks=1 00:06:47.308 --rc geninfo_unexecuted_blocks=1 00:06:47.308 00:06:47.308 ' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.308 --rc genhtml_branch_coverage=1 00:06:47.308 --rc genhtml_function_coverage=1 00:06:47.308 --rc genhtml_legend=1 00:06:47.308 --rc geninfo_all_blocks=1 00:06:47.308 --rc geninfo_unexecuted_blocks=1 00:06:47.308 00:06:47.308 ' 00:06:47.308 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.308 --rc genhtml_branch_coverage=1 00:06:47.308 --rc genhtml_function_coverage=1 00:06:47.308 --rc genhtml_legend=1 00:06:47.308 --rc geninfo_all_blocks=1 00:06:47.308 --rc geninfo_unexecuted_blocks=1 00:06:47.308 00:06:47.308 ' 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.309 08:40:54 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:47.309 ************************************ 00:06:47.309 START TEST dd_inflate_file 00:06:47.309 ************************************ 00:06:47.309 08:40:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:47.309 [2024-12-11 08:40:55.018011] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:47.309 [2024-12-11 08:40:55.018100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:06:47.566 [2024-12-11 08:40:55.158992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.566 [2024-12-11 08:40:55.192652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.566 [2024-12-11 08:40:55.222689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.566  [2024-12-11T08:40:55.598Z] Copying: 64/64 [MB] (average 1422 MBps) 00:06:47.824 00:06:47.824 00:06:47.824 real 0m0.446s 00:06:47.824 user 0m0.254s 00:06:47.824 sys 0m0.227s 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:47.824 ************************************ 00:06:47.824 END TEST dd_inflate_file 00:06:47.824 ************************************ 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:47.824 ************************************ 00:06:47.824 START TEST dd_copy_to_out_bdev 00:06:47.824 ************************************ 00:06:47.824 08:40:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:47.824 { 00:06:47.824 "subsystems": [ 00:06:47.824 { 00:06:47.824 "subsystem": "bdev", 00:06:47.824 "config": [ 00:06:47.824 { 00:06:47.824 "params": { 00:06:47.824 "trtype": "pcie", 00:06:47.824 "traddr": "0000:00:10.0", 00:06:47.824 "name": "Nvme0" 00:06:47.824 }, 00:06:47.824 "method": "bdev_nvme_attach_controller" 00:06:47.824 }, 00:06:47.824 { 00:06:47.824 "params": { 00:06:47.824 "trtype": "pcie", 00:06:47.824 "traddr": "0000:00:11.0", 00:06:47.824 "name": "Nvme1" 00:06:47.824 }, 00:06:47.824 "method": "bdev_nvme_attach_controller" 00:06:47.824 }, 00:06:47.824 { 00:06:47.824 "method": "bdev_wait_for_examine" 00:06:47.824 } 00:06:47.824 ] 00:06:47.824 } 00:06:47.824 ] 00:06:47.824 } 00:06:47.824 [2024-12-11 08:40:55.525766] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:47.824 [2024-12-11 08:40:55.525855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61853 ] 00:06:48.082 [2024-12-11 08:40:55.671430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.082 [2024-12-11 08:40:55.705032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.082 [2024-12-11 08:40:55.735929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.458  [2024-12-11T08:40:57.232Z] Copying: 58/64 [MB] (58 MBps) [2024-12-11T08:40:57.232Z] Copying: 64/64 [MB] (average 59 MBps) 00:06:49.458 00:06:49.458 00:06:49.458 real 0m1.679s 00:06:49.458 user 0m1.507s 00:06:49.458 sys 0m1.320s 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:49.458 ************************************ 00:06:49.458 END TEST dd_copy_to_out_bdev 00:06:49.458 ************************************ 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:49.458 ************************************ 00:06:49.458 START TEST dd_offset_magic 00:06:49.458 ************************************ 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:49.458 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:49.718 [2024-12-11 08:40:57.247590] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:49.718 [2024-12-11 08:40:57.247682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:06:49.718 { 00:06:49.718 "subsystems": [ 00:06:49.718 { 00:06:49.718 "subsystem": "bdev", 00:06:49.718 "config": [ 00:06:49.718 { 00:06:49.718 "params": { 00:06:49.718 "trtype": "pcie", 00:06:49.718 "traddr": "0000:00:10.0", 00:06:49.718 "name": "Nvme0" 00:06:49.718 }, 00:06:49.718 "method": "bdev_nvme_attach_controller" 00:06:49.718 }, 00:06:49.718 { 00:06:49.718 "params": { 00:06:49.718 "trtype": "pcie", 00:06:49.718 "traddr": "0000:00:11.0", 00:06:49.718 "name": "Nvme1" 00:06:49.718 }, 00:06:49.718 "method": "bdev_nvme_attach_controller" 00:06:49.718 }, 00:06:49.718 { 00:06:49.718 "method": "bdev_wait_for_examine" 00:06:49.718 } 00:06:49.718 ] 00:06:49.718 } 00:06:49.718 ] 00:06:49.718 } 00:06:49.718 [2024-12-11 08:40:57.392227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.718 [2024-12-11 08:40:57.425208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.718 [2024-12-11 08:40:57.454664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.983  [2024-12-11T08:40:58.016Z] Copying: 65/65 [MB] (average 1065 MBps) 00:06:50.242 00:06:50.242 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:50.242 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:50.242 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:50.242 08:40:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:50.242 { 00:06:50.242 "subsystems": [ 00:06:50.242 { 00:06:50.242 "subsystem": "bdev", 00:06:50.242 "config": [ 00:06:50.242 { 00:06:50.242 "params": { 00:06:50.242 "trtype": "pcie", 00:06:50.242 "traddr": "0000:00:10.0", 00:06:50.242 "name": "Nvme0" 00:06:50.242 }, 00:06:50.242 "method": "bdev_nvme_attach_controller" 00:06:50.242 }, 00:06:50.242 { 00:06:50.242 "params": { 00:06:50.242 "trtype": "pcie", 00:06:50.242 "traddr": "0000:00:11.0", 00:06:50.242 "name": "Nvme1" 00:06:50.242 }, 00:06:50.242 "method": "bdev_nvme_attach_controller" 00:06:50.242 }, 00:06:50.242 { 00:06:50.242 "method": "bdev_wait_for_examine" 00:06:50.242 } 00:06:50.242 ] 00:06:50.242 } 00:06:50.242 ] 00:06:50.242 } 00:06:50.242 [2024-12-11 08:40:57.903238] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:50.242 [2024-12-11 08:40:57.903337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61907 ] 00:06:50.500 [2024-12-11 08:40:58.055259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.500 [2024-12-11 08:40:58.093935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.500 [2024-12-11 08:40:58.126342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.758  [2024-12-11T08:40:58.532Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:50.758 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:50.758 08:40:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:50.758 [2024-12-11 08:40:58.482155] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:50.758 [2024-12-11 08:40:58.482258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ] 00:06:50.758 { 00:06:50.758 "subsystems": [ 00:06:50.758 { 00:06:50.758 "subsystem": "bdev", 00:06:50.758 "config": [ 00:06:50.758 { 00:06:50.758 "params": { 00:06:50.758 "trtype": "pcie", 00:06:50.758 "traddr": "0000:00:10.0", 00:06:50.758 "name": "Nvme0" 00:06:50.758 }, 00:06:50.758 "method": "bdev_nvme_attach_controller" 00:06:50.758 }, 00:06:50.758 { 00:06:50.758 "params": { 00:06:50.758 "trtype": "pcie", 00:06:50.758 "traddr": "0000:00:11.0", 00:06:50.758 "name": "Nvme1" 00:06:50.758 }, 00:06:50.759 "method": "bdev_nvme_attach_controller" 00:06:50.759 }, 00:06:50.759 { 00:06:50.759 "method": "bdev_wait_for_examine" 00:06:50.759 } 00:06:50.759 ] 00:06:50.759 } 00:06:50.759 ] 00:06:50.759 } 00:06:51.017 [2024-12-11 08:40:58.631196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.017 [2024-12-11 08:40:58.663836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.017 [2024-12-11 08:40:58.693128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.275  [2024-12-11T08:40:59.307Z] Copying: 65/65 [MB] (average 1203 MBps) 00:06:51.533 00:06:51.533 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:51.533 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:51.533 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:51.533 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:51.533 [2024-12-11 08:40:59.123436] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:51.533 [2024-12-11 08:40:59.123531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:06:51.533 { 00:06:51.533 "subsystems": [ 00:06:51.533 { 00:06:51.533 "subsystem": "bdev", 00:06:51.533 "config": [ 00:06:51.533 { 00:06:51.533 "params": { 00:06:51.533 "trtype": "pcie", 00:06:51.533 "traddr": "0000:00:10.0", 00:06:51.533 "name": "Nvme0" 00:06:51.533 }, 00:06:51.533 "method": "bdev_nvme_attach_controller" 00:06:51.533 }, 00:06:51.533 { 00:06:51.533 "params": { 00:06:51.533 "trtype": "pcie", 00:06:51.533 "traddr": "0000:00:11.0", 00:06:51.533 "name": "Nvme1" 00:06:51.533 }, 00:06:51.533 "method": "bdev_nvme_attach_controller" 00:06:51.533 }, 00:06:51.533 { 00:06:51.534 "method": "bdev_wait_for_examine" 00:06:51.534 } 00:06:51.534 ] 00:06:51.534 } 00:06:51.534 ] 00:06:51.534 } 00:06:51.534 [2024-12-11 08:40:59.266273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.534 [2024-12-11 08:40:59.298902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.792 [2024-12-11 08:40:59.328102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.792  [2024-12-11T08:40:59.825Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:52.051 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:52.051 00:06:52.051 real 0m2.421s 00:06:52.051 user 0m1.807s 00:06:52.051 sys 0m0.601s 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:52.051 ************************************ 00:06:52.051 END TEST dd_offset_magic 00:06:52.051 ************************************ 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:52.051 08:40:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.051 [2024-12-11 08:40:59.715449] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:52.051 [2024-12-11 08:40:59.715551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:06:52.051 { 00:06:52.051 "subsystems": [ 00:06:52.051 { 00:06:52.051 "subsystem": "bdev", 00:06:52.051 "config": [ 00:06:52.051 { 00:06:52.051 "params": { 00:06:52.051 "trtype": "pcie", 00:06:52.051 "traddr": "0000:00:10.0", 00:06:52.051 "name": "Nvme0" 00:06:52.051 }, 00:06:52.051 "method": "bdev_nvme_attach_controller" 00:06:52.051 }, 00:06:52.051 { 00:06:52.051 "params": { 00:06:52.051 "trtype": "pcie", 00:06:52.051 "traddr": "0000:00:11.0", 00:06:52.051 "name": "Nvme1" 00:06:52.051 }, 00:06:52.051 "method": "bdev_nvme_attach_controller" 00:06:52.051 }, 00:06:52.051 { 00:06:52.051 "method": "bdev_wait_for_examine" 00:06:52.051 } 00:06:52.051 ] 00:06:52.051 } 00:06:52.051 ] 00:06:52.051 } 00:06:52.310 [2024-12-11 08:40:59.861793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.310 [2024-12-11 08:40:59.894402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.310 [2024-12-11 08:40:59.923511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.568  [2024-12-11T08:41:00.342Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:52.568 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:52.568 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.568 [2024-12-11 08:41:00.260948] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:52.568 [2024-12-11 08:41:00.261037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61991 ] 00:06:52.568 { 00:06:52.568 "subsystems": [ 00:06:52.568 { 00:06:52.568 "subsystem": "bdev", 00:06:52.568 "config": [ 00:06:52.568 { 00:06:52.568 "params": { 00:06:52.568 "trtype": "pcie", 00:06:52.568 "traddr": "0000:00:10.0", 00:06:52.568 "name": "Nvme0" 00:06:52.568 }, 00:06:52.568 "method": "bdev_nvme_attach_controller" 00:06:52.568 }, 00:06:52.568 { 00:06:52.568 "params": { 00:06:52.568 "trtype": "pcie", 00:06:52.568 "traddr": "0000:00:11.0", 00:06:52.568 "name": "Nvme1" 00:06:52.568 }, 00:06:52.568 "method": "bdev_nvme_attach_controller" 00:06:52.568 }, 00:06:52.568 { 00:06:52.568 "method": "bdev_wait_for_examine" 00:06:52.568 } 00:06:52.568 ] 00:06:52.568 } 00:06:52.568 ] 00:06:52.568 } 00:06:52.826 [2024-12-11 08:41:00.401113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.826 [2024-12-11 08:41:00.433873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.826 [2024-12-11 08:41:00.463054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.084  [2024-12-11T08:41:00.858Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:53.084 00:06:53.084 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:53.084 ************************************ 00:06:53.084 END TEST spdk_dd_bdev_to_bdev 00:06:53.084 ************************************ 00:06:53.084 00:06:53.084 real 0m6.004s 00:06:53.084 user 0m4.533s 00:06:53.084 sys 0m2.674s 00:06:53.084 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.084 08:41:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:53.084 08:41:00 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:53.084 08:41:00 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:53.084 08:41:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.084 08:41:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.084 08:41:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:53.084 ************************************ 00:06:53.084 START TEST spdk_dd_uring 00:06:53.084 ************************************ 00:06:53.084 08:41:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:53.343 * Looking for test storage... 
00:06:53.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:53.343 08:41:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.343 08:41:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.343 08:41:00 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.343 --rc genhtml_branch_coverage=1 00:06:53.343 --rc genhtml_function_coverage=1 00:06:53.343 --rc genhtml_legend=1 00:06:53.343 --rc geninfo_all_blocks=1 00:06:53.343 --rc geninfo_unexecuted_blocks=1 00:06:53.343 00:06:53.343 ' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.343 --rc genhtml_branch_coverage=1 00:06:53.343 --rc genhtml_function_coverage=1 00:06:53.343 --rc genhtml_legend=1 00:06:53.343 --rc geninfo_all_blocks=1 00:06:53.343 --rc geninfo_unexecuted_blocks=1 00:06:53.343 00:06:53.343 ' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.343 --rc genhtml_branch_coverage=1 00:06:53.343 --rc genhtml_function_coverage=1 00:06:53.343 --rc genhtml_legend=1 00:06:53.343 --rc geninfo_all_blocks=1 00:06:53.343 --rc geninfo_unexecuted_blocks=1 00:06:53.343 00:06:53.343 ' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.343 --rc genhtml_branch_coverage=1 00:06:53.343 --rc genhtml_function_coverage=1 00:06:53.343 --rc genhtml_legend=1 00:06:53.343 --rc geninfo_all_blocks=1 00:06:53.343 --rc geninfo_unexecuted_blocks=1 00:06:53.343 00:06:53.343 ' 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.343 08:41:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:53.344 ************************************ 00:06:53.344 START TEST dd_uring_copy 00:06:53.344 ************************************ 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:53.344 
08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=wz4tdrs9zblmj9a4xtk4t6pnr7rbdds1j59em1ubg1fgc50ctai39csqtp9dxmdgl7jyg99mq4kcjvuheo1nf3nvj8668qma5wjm559spkofrw72n2gsbc6it0t31cwyhyna0hm32be1cza9tkl4gw1psqul40lj985gfdl5izdobx91nkq893tojqp4wj9oecm2ci5yyzps0nvpwb9l4mn6o524n6fucp0o110j5d0kjasut9isbmfsievk4tm95aprn18cov8qb5ufp1qo2t7gc2ege5thsfqo8cvdxfaeveg16e29j1fqgal9qo5awhxzjnfx1z6rmbnrzzagw0xo0i85te9br3yx1qzlatcjq4uwr9hjr04b9xxj2txsv86lr9mvllqnr19fm4eeiwdcql40bztresamyuwkda5w6acp7y28g510hy3uv6m78fwy5amzknne2nzzc3ph5uqylypxtrcludb8cohc11rxrabbh50j2yuv54eou86a5yjvd1jt0suvacain01jr7t94n2xwcyycnvc66g88p0oh1m77ap56s9kbquxfu5k7zpivq0g8j5n7pfxj983z6jh3g3kp1geepejzj736zmlfsonqj4slktjnc967rtvioq7e6yhx7cc4kbdeo5pdnr2pyo81k90cbrlxjwv7lbe8056sggz7kao2mac1r91oe7gwjsga23xkn289p1zyapa3dn4hh4t6tknbtkqiyvju8derj1d8a3m1ennhcq4vqkh93dbx5k4lips5nq3kipmugunf660n2yrpzdry4gaeqperokaxg094v9mmnru8fhgliu48zer5aok4iw8f1xxmfd7inytlwdy01yrjnvl8390aqzkjwyqm41n6fxjzvnft4vksnctz2bugx5hp4hh90ag5s2z8lpt40pkhc9g64xxckv8xh0vi4xu5q03jdvrgyztvmheb2bqho3m6ikkghfbnbeknc4lkgpb13s6w05t3wm2x5xr5672v6td 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
wz4tdrs9zblmj9a4xtk4t6pnr7rbdds1j59em1ubg1fgc50ctai39csqtp9dxmdgl7jyg99mq4kcjvuheo1nf3nvj8668qma5wjm559spkofrw72n2gsbc6it0t31cwyhyna0hm32be1cza9tkl4gw1psqul40lj985gfdl5izdobx91nkq893tojqp4wj9oecm2ci5yyzps0nvpwb9l4mn6o524n6fucp0o110j5d0kjasut9isbmfsievk4tm95aprn18cov8qb5ufp1qo2t7gc2ege5thsfqo8cvdxfaeveg16e29j1fqgal9qo5awhxzjnfx1z6rmbnrzzagw0xo0i85te9br3yx1qzlatcjq4uwr9hjr04b9xxj2txsv86lr9mvllqnr19fm4eeiwdcql40bztresamyuwkda5w6acp7y28g510hy3uv6m78fwy5amzknne2nzzc3ph5uqylypxtrcludb8cohc11rxrabbh50j2yuv54eou86a5yjvd1jt0suvacain01jr7t94n2xwcyycnvc66g88p0oh1m77ap56s9kbquxfu5k7zpivq0g8j5n7pfxj983z6jh3g3kp1geepejzj736zmlfsonqj4slktjnc967rtvioq7e6yhx7cc4kbdeo5pdnr2pyo81k90cbrlxjwv7lbe8056sggz7kao2mac1r91oe7gwjsga23xkn289p1zyapa3dn4hh4t6tknbtkqiyvju8derj1d8a3m1ennhcq4vqkh93dbx5k4lips5nq3kipmugunf660n2yrpzdry4gaeqperokaxg094v9mmnru8fhgliu48zer5aok4iw8f1xxmfd7inytlwdy01yrjnvl8390aqzkjwyqm41n6fxjzvnft4vksnctz2bugx5hp4hh90ag5s2z8lpt40pkhc9g64xxckv8xh0vi4xu5q03jdvrgyztvmheb2bqho3m6ikkghfbnbeknc4lkgpb13s6w05t3wm2x5xr5672v6td 00:06:53.344 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:53.602 [2024-12-11 08:41:01.121536] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:06:53.602 [2024-12-11 08:41:01.121631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:06:53.602 [2024-12-11 08:41:01.270794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.602 [2024-12-11 08:41:01.309078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.602 [2024-12-11 08:41:01.340872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.169  [2024-12-11T08:41:02.201Z] Copying: 511/511 [MB] (average 1896 MBps) 00:06:54.427 00:06:54.427 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:54.427 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:54.427 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:54.427 08:41:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.427 [2024-12-11 08:41:02.027518] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:54.427 [2024-12-11 08:41:02.027611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:06:54.427 { 00:06:54.427 "subsystems": [ 00:06:54.427 { 00:06:54.427 "subsystem": "bdev", 00:06:54.427 "config": [ 00:06:54.427 { 00:06:54.427 "params": { 00:06:54.427 "block_size": 512, 00:06:54.427 "num_blocks": 1048576, 00:06:54.427 "name": "malloc0" 00:06:54.427 }, 00:06:54.427 "method": "bdev_malloc_create" 00:06:54.427 }, 00:06:54.427 { 00:06:54.427 "params": { 00:06:54.427 "filename": "/dev/zram1", 00:06:54.427 "name": "uring0" 00:06:54.428 }, 00:06:54.428 "method": "bdev_uring_create" 00:06:54.428 }, 00:06:54.428 { 00:06:54.428 "method": "bdev_wait_for_examine" 00:06:54.428 } 00:06:54.428 ] 00:06:54.428 } 00:06:54.428 ] 00:06:54.428 } 00:06:54.428 [2024-12-11 08:41:02.171613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.686 [2024-12-11 08:41:02.204519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.686 [2024-12-11 08:41:02.234089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.620  [2024-12-11T08:41:04.768Z] Copying: 213/512 [MB] (213 MBps) [2024-12-11T08:41:04.768Z] Copying: 427/512 [MB] (213 MBps) [2024-12-11T08:41:05.026Z] Copying: 512/512 [MB] (average 212 MBps) 00:06:57.252 00:06:57.252 08:41:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:57.252 08:41:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:57.252 08:41:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.252 08:41:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 [2024-12-11 08:41:05.045375] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:06:57.508 [2024-12-11 08:41:05.045991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:06:57.508 { 00:06:57.508 "subsystems": [ 00:06:57.508 { 00:06:57.508 "subsystem": "bdev", 00:06:57.508 "config": [ 00:06:57.508 { 00:06:57.508 "params": { 00:06:57.508 "block_size": 512, 00:06:57.508 "num_blocks": 1048576, 00:06:57.508 "name": "malloc0" 00:06:57.508 }, 00:06:57.508 "method": "bdev_malloc_create" 00:06:57.508 }, 00:06:57.508 { 00:06:57.508 "params": { 00:06:57.508 "filename": "/dev/zram1", 00:06:57.508 "name": "uring0" 00:06:57.508 }, 00:06:57.508 "method": "bdev_uring_create" 00:06:57.508 }, 00:06:57.508 { 00:06:57.508 "method": "bdev_wait_for_examine" 00:06:57.508 } 00:06:57.508 ] 00:06:57.508 } 00:06:57.508 ] 00:06:57.508 } 00:06:57.508 [2024-12-11 08:41:05.200785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.508 [2024-12-11 08:41:05.235223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.508 [2024-12-11 08:41:05.265789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.880  [2024-12-11T08:41:07.587Z] Copying: 175/512 [MB] (175 MBps) [2024-12-11T08:41:08.522Z] Copying: 335/512 [MB] (159 MBps) [2024-12-11T08:41:08.522Z] Copying: 496/512 [MB] (161 MBps) [2024-12-11T08:41:08.781Z] Copying: 512/512 [MB] (average 165 MBps) 00:07:01.007 00:07:01.007 08:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:01.007 08:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ wz4tdrs9zblmj9a4xtk4t6pnr7rbdds1j59em1ubg1fgc50ctai39csqtp9dxmdgl7jyg99mq4kcjvuheo1nf3nvj8668qma5wjm559spkofrw72n2gsbc6it0t31cwyhyna0hm32be1cza9tkl4gw1psqul40lj985gfdl5izdobx91nkq893tojqp4wj9oecm2ci5yyzps0nvpwb9l4mn6o524n6fucp0o110j5d0kjasut9isbmfsievk4tm95aprn18cov8qb5ufp1qo2t7gc2ege5thsfqo8cvdxfaeveg16e29j1fqgal9qo5awhxzjnfx1z6rmbnrzzagw0xo0i85te9br3yx1qzlatcjq4uwr9hjr04b9xxj2txsv86lr9mvllqnr19fm4eeiwdcql40bztresamyuwkda5w6acp7y28g510hy3uv6m78fwy5amzknne2nzzc3ph5uqylypxtrcludb8cohc11rxrabbh50j2yuv54eou86a5yjvd1jt0suvacain01jr7t94n2xwcyycnvc66g88p0oh1m77ap56s9kbquxfu5k7zpivq0g8j5n7pfxj983z6jh3g3kp1geepejzj736zmlfsonqj4slktjnc967rtvioq7e6yhx7cc4kbdeo5pdnr2pyo81k90cbrlxjwv7lbe8056sggz7kao2mac1r91oe7gwjsga23xkn289p1zyapa3dn4hh4t6tknbtkqiyvju8derj1d8a3m1ennhcq4vqkh93dbx5k4lips5nq3kipmugunf660n2yrpzdry4gaeqperokaxg094v9mmnru8fhgliu48zer5aok4iw8f1xxmfd7inytlwdy01yrjnvl8390aqzkjwyqm41n6fxjzvnft4vksnctz2bugx5hp4hh90ag5s2z8lpt40pkhc9g64xxckv8xh0vi4xu5q03jdvrgyztvmheb2bqho3m6ikkghfbnbeknc4lkgpb13s6w05t3wm2x5xr5672v6td == 
\w\z\4\t\d\r\s\9\z\b\l\m\j\9\a\4\x\t\k\4\t\6\p\n\r\7\r\b\d\d\s\1\j\5\9\e\m\1\u\b\g\1\f\g\c\5\0\c\t\a\i\3\9\c\s\q\t\p\9\d\x\m\d\g\l\7\j\y\g\9\9\m\q\4\k\c\j\v\u\h\e\o\1\n\f\3\n\v\j\8\6\6\8\q\m\a\5\w\j\m\5\5\9\s\p\k\o\f\r\w\7\2\n\2\g\s\b\c\6\i\t\0\t\3\1\c\w\y\h\y\n\a\0\h\m\3\2\b\e\1\c\z\a\9\t\k\l\4\g\w\1\p\s\q\u\l\4\0\l\j\9\8\5\g\f\d\l\5\i\z\d\o\b\x\9\1\n\k\q\8\9\3\t\o\j\q\p\4\w\j\9\o\e\c\m\2\c\i\5\y\y\z\p\s\0\n\v\p\w\b\9\l\4\m\n\6\o\5\2\4\n\6\f\u\c\p\0\o\1\1\0\j\5\d\0\k\j\a\s\u\t\9\i\s\b\m\f\s\i\e\v\k\4\t\m\9\5\a\p\r\n\1\8\c\o\v\8\q\b\5\u\f\p\1\q\o\2\t\7\g\c\2\e\g\e\5\t\h\s\f\q\o\8\c\v\d\x\f\a\e\v\e\g\1\6\e\2\9\j\1\f\q\g\a\l\9\q\o\5\a\w\h\x\z\j\n\f\x\1\z\6\r\m\b\n\r\z\z\a\g\w\0\x\o\0\i\8\5\t\e\9\b\r\3\y\x\1\q\z\l\a\t\c\j\q\4\u\w\r\9\h\j\r\0\4\b\9\x\x\j\2\t\x\s\v\8\6\l\r\9\m\v\l\l\q\n\r\1\9\f\m\4\e\e\i\w\d\c\q\l\4\0\b\z\t\r\e\s\a\m\y\u\w\k\d\a\5\w\6\a\c\p\7\y\2\8\g\5\1\0\h\y\3\u\v\6\m\7\8\f\w\y\5\a\m\z\k\n\n\e\2\n\z\z\c\3\p\h\5\u\q\y\l\y\p\x\t\r\c\l\u\d\b\8\c\o\h\c\1\1\r\x\r\a\b\b\h\5\0\j\2\y\u\v\5\4\e\o\u\8\6\a\5\y\j\v\d\1\j\t\0\s\u\v\a\c\a\i\n\0\1\j\r\7\t\9\4\n\2\x\w\c\y\y\c\n\v\c\6\6\g\8\8\p\0\o\h\1\m\7\7\a\p\5\6\s\9\k\b\q\u\x\f\u\5\k\7\z\p\i\v\q\0\g\8\j\5\n\7\p\f\x\j\9\8\3\z\6\j\h\3\g\3\k\p\1\g\e\e\p\e\j\z\j\7\3\6\z\m\l\f\s\o\n\q\j\4\s\l\k\t\j\n\c\9\6\7\r\t\v\i\o\q\7\e\6\y\h\x\7\c\c\4\k\b\d\e\o\5\p\d\n\r\2\p\y\o\8\1\k\9\0\c\b\r\l\x\j\w\v\7\l\b\e\8\0\5\6\s\g\g\z\7\k\a\o\2\m\a\c\1\r\9\1\o\e\7\g\w\j\s\g\a\2\3\x\k\n\2\8\9\p\1\z\y\a\p\a\3\d\n\4\h\h\4\t\6\t\k\n\b\t\k\q\i\y\v\j\u\8\d\e\r\j\1\d\8\a\3\m\1\e\n\n\h\c\q\4\v\q\k\h\9\3\d\b\x\5\k\4\l\i\p\s\5\n\q\3\k\i\p\m\u\g\u\n\f\6\6\0\n\2\y\r\p\z\d\r\y\4\g\a\e\q\p\e\r\o\k\a\x\g\0\9\4\v\9\m\m\n\r\u\8\f\h\g\l\i\u\4\8\z\e\r\5\a\o\k\4\i\w\8\f\1\x\x\m\f\d\7\i\n\y\t\l\w\d\y\0\1\y\r\j\n\v\l\8\3\9\0\a\q\z\k\j\w\y\q\m\4\1\n\6\f\x\j\z\v\n\f\t\4\v\k\s\n\c\t\z\2\b\u\g\x\5\h\p\4\h\h\9\0\a\g\5\s\2\z\8\l\p\t\4\0\p\k\h\c\9\g\6\4\x\x\c\k\v\8\x\h\0\v\i\4\x\u\5\q\0\3\j\d\v\r\g\y\z\t\v\m\h\e\b\2\b\q\h\o\3\m\6\i\k\k\g\h\f\b\n\b\e\k\n\c\4\l\k\g\p\b\1\3\s\6\w\0\5\t\3\w\m\2\x\5\x\r\5\6\7\2\v\6\t\d ]] 00:07:01.007 08:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:01.008 08:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ wz4tdrs9zblmj9a4xtk4t6pnr7rbdds1j59em1ubg1fgc50ctai39csqtp9dxmdgl7jyg99mq4kcjvuheo1nf3nvj8668qma5wjm559spkofrw72n2gsbc6it0t31cwyhyna0hm32be1cza9tkl4gw1psqul40lj985gfdl5izdobx91nkq893tojqp4wj9oecm2ci5yyzps0nvpwb9l4mn6o524n6fucp0o110j5d0kjasut9isbmfsievk4tm95aprn18cov8qb5ufp1qo2t7gc2ege5thsfqo8cvdxfaeveg16e29j1fqgal9qo5awhxzjnfx1z6rmbnrzzagw0xo0i85te9br3yx1qzlatcjq4uwr9hjr04b9xxj2txsv86lr9mvllqnr19fm4eeiwdcql40bztresamyuwkda5w6acp7y28g510hy3uv6m78fwy5amzknne2nzzc3ph5uqylypxtrcludb8cohc11rxrabbh50j2yuv54eou86a5yjvd1jt0suvacain01jr7t94n2xwcyycnvc66g88p0oh1m77ap56s9kbquxfu5k7zpivq0g8j5n7pfxj983z6jh3g3kp1geepejzj736zmlfsonqj4slktjnc967rtvioq7e6yhx7cc4kbdeo5pdnr2pyo81k90cbrlxjwv7lbe8056sggz7kao2mac1r91oe7gwjsga23xkn289p1zyapa3dn4hh4t6tknbtkqiyvju8derj1d8a3m1ennhcq4vqkh93dbx5k4lips5nq3kipmugunf660n2yrpzdry4gaeqperokaxg094v9mmnru8fhgliu48zer5aok4iw8f1xxmfd7inytlwdy01yrjnvl8390aqzkjwyqm41n6fxjzvnft4vksnctz2bugx5hp4hh90ag5s2z8lpt40pkhc9g64xxckv8xh0vi4xu5q03jdvrgyztvmheb2bqho3m6ikkghfbnbeknc4lkgpb13s6w05t3wm2x5xr5672v6td == 
\w\z\4\t\d\r\s\9\z\b\l\m\j\9\a\4\x\t\k\4\t\6\p\n\r\7\r\b\d\d\s\1\j\5\9\e\m\1\u\b\g\1\f\g\c\5\0\c\t\a\i\3\9\c\s\q\t\p\9\d\x\m\d\g\l\7\j\y\g\9\9\m\q\4\k\c\j\v\u\h\e\o\1\n\f\3\n\v\j\8\6\6\8\q\m\a\5\w\j\m\5\5\9\s\p\k\o\f\r\w\7\2\n\2\g\s\b\c\6\i\t\0\t\3\1\c\w\y\h\y\n\a\0\h\m\3\2\b\e\1\c\z\a\9\t\k\l\4\g\w\1\p\s\q\u\l\4\0\l\j\9\8\5\g\f\d\l\5\i\z\d\o\b\x\9\1\n\k\q\8\9\3\t\o\j\q\p\4\w\j\9\o\e\c\m\2\c\i\5\y\y\z\p\s\0\n\v\p\w\b\9\l\4\m\n\6\o\5\2\4\n\6\f\u\c\p\0\o\1\1\0\j\5\d\0\k\j\a\s\u\t\9\i\s\b\m\f\s\i\e\v\k\4\t\m\9\5\a\p\r\n\1\8\c\o\v\8\q\b\5\u\f\p\1\q\o\2\t\7\g\c\2\e\g\e\5\t\h\s\f\q\o\8\c\v\d\x\f\a\e\v\e\g\1\6\e\2\9\j\1\f\q\g\a\l\9\q\o\5\a\w\h\x\z\j\n\f\x\1\z\6\r\m\b\n\r\z\z\a\g\w\0\x\o\0\i\8\5\t\e\9\b\r\3\y\x\1\q\z\l\a\t\c\j\q\4\u\w\r\9\h\j\r\0\4\b\9\x\x\j\2\t\x\s\v\8\6\l\r\9\m\v\l\l\q\n\r\1\9\f\m\4\e\e\i\w\d\c\q\l\4\0\b\z\t\r\e\s\a\m\y\u\w\k\d\a\5\w\6\a\c\p\7\y\2\8\g\5\1\0\h\y\3\u\v\6\m\7\8\f\w\y\5\a\m\z\k\n\n\e\2\n\z\z\c\3\p\h\5\u\q\y\l\y\p\x\t\r\c\l\u\d\b\8\c\o\h\c\1\1\r\x\r\a\b\b\h\5\0\j\2\y\u\v\5\4\e\o\u\8\6\a\5\y\j\v\d\1\j\t\0\s\u\v\a\c\a\i\n\0\1\j\r\7\t\9\4\n\2\x\w\c\y\y\c\n\v\c\6\6\g\8\8\p\0\o\h\1\m\7\7\a\p\5\6\s\9\k\b\q\u\x\f\u\5\k\7\z\p\i\v\q\0\g\8\j\5\n\7\p\f\x\j\9\8\3\z\6\j\h\3\g\3\k\p\1\g\e\e\p\e\j\z\j\7\3\6\z\m\l\f\s\o\n\q\j\4\s\l\k\t\j\n\c\9\6\7\r\t\v\i\o\q\7\e\6\y\h\x\7\c\c\4\k\b\d\e\o\5\p\d\n\r\2\p\y\o\8\1\k\9\0\c\b\r\l\x\j\w\v\7\l\b\e\8\0\5\6\s\g\g\z\7\k\a\o\2\m\a\c\1\r\9\1\o\e\7\g\w\j\s\g\a\2\3\x\k\n\2\8\9\p\1\z\y\a\p\a\3\d\n\4\h\h\4\t\6\t\k\n\b\t\k\q\i\y\v\j\u\8\d\e\r\j\1\d\8\a\3\m\1\e\n\n\h\c\q\4\v\q\k\h\9\3\d\b\x\5\k\4\l\i\p\s\5\n\q\3\k\i\p\m\u\g\u\n\f\6\6\0\n\2\y\r\p\z\d\r\y\4\g\a\e\q\p\e\r\o\k\a\x\g\0\9\4\v\9\m\m\n\r\u\8\f\h\g\l\i\u\4\8\z\e\r\5\a\o\k\4\i\w\8\f\1\x\x\m\f\d\7\i\n\y\t\l\w\d\y\0\1\y\r\j\n\v\l\8\3\9\0\a\q\z\k\j\w\y\q\m\4\1\n\6\f\x\j\z\v\n\f\t\4\v\k\s\n\c\t\z\2\b\u\g\x\5\h\p\4\h\h\9\0\a\g\5\s\2\z\8\l\p\t\4\0\p\k\h\c\9\g\6\4\x\x\c\k\v\8\x\h\0\v\i\4\x\u\5\q\0\3\j\d\v\r\g\y\z\t\v\m\h\e\b\2\b\q\h\o\3\m\6\i\k\k\g\h\f\b\n\b\e\k\n\c\4\l\k\g\p\b\1\3\s\6\w\0\5\t\3\w\m\2\x\5\x\r\5\6\7\2\v\6\t\d ]] 00:07:01.008 08:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:01.576 08:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:01.576 08:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:01.576 08:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.576 08:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.576 [2024-12-11 08:41:09.130626] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:01.576 [2024-12-11 08:41:09.130732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:07:01.576 { 00:07:01.576 "subsystems": [ 00:07:01.576 { 00:07:01.576 "subsystem": "bdev", 00:07:01.576 "config": [ 00:07:01.576 { 00:07:01.576 "params": { 00:07:01.576 "block_size": 512, 00:07:01.576 "num_blocks": 1048576, 00:07:01.576 "name": "malloc0" 00:07:01.576 }, 00:07:01.576 "method": "bdev_malloc_create" 00:07:01.576 }, 00:07:01.576 { 00:07:01.576 "params": { 00:07:01.576 "filename": "/dev/zram1", 00:07:01.576 "name": "uring0" 00:07:01.576 }, 00:07:01.576 "method": "bdev_uring_create" 00:07:01.576 }, 00:07:01.576 { 00:07:01.577 "method": "bdev_wait_for_examine" 00:07:01.577 } 00:07:01.577 ] 00:07:01.577 } 00:07:01.577 ] 00:07:01.577 } 00:07:01.577 [2024-12-11 08:41:09.275901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.577 [2024-12-11 08:41:09.309227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.577 [2024-12-11 08:41:09.338380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.969  [2024-12-11T08:41:11.689Z] Copying: 159/512 [MB] (159 MBps) [2024-12-11T08:41:12.624Z] Copying: 318/512 [MB] (159 MBps) [2024-12-11T08:41:12.883Z] Copying: 480/512 [MB] (161 MBps) [2024-12-11T08:41:12.883Z] Copying: 512/512 [MB] (average 160 MBps) 00:07:05.109 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:05.109 08:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.369 [2024-12-11 08:41:12.927042] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:05.369 [2024-12-11 08:41:12.927183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:07:05.369 { 00:07:05.369 "subsystems": [ 00:07:05.369 { 00:07:05.369 "subsystem": "bdev", 00:07:05.369 "config": [ 00:07:05.369 { 00:07:05.369 "params": { 00:07:05.369 "block_size": 512, 00:07:05.369 "num_blocks": 1048576, 00:07:05.369 "name": "malloc0" 00:07:05.369 }, 00:07:05.369 "method": "bdev_malloc_create" 00:07:05.369 }, 00:07:05.369 { 00:07:05.369 "params": { 00:07:05.369 "filename": "/dev/zram1", 00:07:05.369 "name": "uring0" 00:07:05.369 }, 00:07:05.369 "method": "bdev_uring_create" 00:07:05.369 }, 00:07:05.369 { 00:07:05.369 "params": { 00:07:05.369 "name": "uring0" 00:07:05.369 }, 00:07:05.369 "method": "bdev_uring_delete" 00:07:05.369 }, 00:07:05.369 { 00:07:05.369 "method": "bdev_wait_for_examine" 00:07:05.369 } 00:07:05.369 ] 00:07:05.369 } 00:07:05.369 ] 00:07:05.369 } 00:07:05.369 [2024-12-11 08:41:13.076464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.369 [2024-12-11 08:41:13.109240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.369 [2024-12-11 08:41:13.136859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.628  [2024-12-11T08:41:13.662Z] Copying: 0/0 [B] (average 0 Bps) 00:07:05.888 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.888 08:41:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.888 08:41:13 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:05.888 { 00:07:05.888 "subsystems": [ 00:07:05.888 { 00:07:05.888 "subsystem": "bdev", 00:07:05.888 "config": [ 00:07:05.888 { 00:07:05.888 "params": { 00:07:05.888 "block_size": 512, 00:07:05.888 "num_blocks": 1048576, 00:07:05.888 "name": "malloc0" 00:07:05.888 }, 00:07:05.888 "method": "bdev_malloc_create" 00:07:05.888 }, 00:07:05.888 { 00:07:05.888 "params": { 00:07:05.888 "filename": "/dev/zram1", 00:07:05.888 "name": "uring0" 00:07:05.888 }, 00:07:05.888 "method": "bdev_uring_create" 00:07:05.888 }, 00:07:05.888 { 00:07:05.888 "params": { 00:07:05.888 "name": "uring0" 00:07:05.888 }, 00:07:05.888 "method": "bdev_uring_delete" 00:07:05.888 }, 00:07:05.888 { 00:07:05.888 "method": "bdev_wait_for_examine" 00:07:05.888 } 00:07:05.888 ] 00:07:05.888 } 00:07:05.888 ] 00:07:05.888 } 00:07:05.888 [2024-12-11 08:41:13.550688] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:05.888 [2024-12-11 08:41:13.550826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62277 ] 00:07:06.148 [2024-12-11 08:41:13.698629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.148 [2024-12-11 08:41:13.732042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.148 [2024-12-11 08:41:13.762272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.148 [2024-12-11 08:41:13.888242] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:06.148 [2024-12-11 08:41:13.888296] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:06.148 [2024-12-11 08:41:13.888308] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:07:06.148 [2024-12-11 08:41:13.888319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.407 [2024-12-11 08:41:14.048838] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:06.407 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:06.666 00:07:06.666 real 0m13.304s 00:07:06.666 user 0m9.124s 00:07:06.666 sys 0m11.632s 00:07:06.666 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.666 ************************************ 00:07:06.666 END TEST dd_uring_copy 00:07:06.666 ************************************ 00:07:06.666 08:41:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 00:07:06.666 real 0m13.564s 00:07:06.666 user 0m9.290s 00:07:06.666 sys 0m11.730s 00:07:06.666 08:41:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.666 08:41:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 ************************************ 00:07:06.666 END TEST spdk_dd_uring 00:07:06.666 ************************************ 00:07:06.666 08:41:14 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:06.666 08:41:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.666 08:41:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.666 08:41:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.925 ************************************ 00:07:06.925 START TEST spdk_dd_sparse 00:07:06.925 ************************************ 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:06.925 * Looking for test storage... 00:07:06.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:06.925 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.926 --rc genhtml_branch_coverage=1 00:07:06.926 --rc genhtml_function_coverage=1 00:07:06.926 --rc genhtml_legend=1 00:07:06.926 --rc geninfo_all_blocks=1 00:07:06.926 --rc geninfo_unexecuted_blocks=1 00:07:06.926 00:07:06.926 ' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.926 --rc genhtml_branch_coverage=1 00:07:06.926 --rc genhtml_function_coverage=1 00:07:06.926 --rc genhtml_legend=1 00:07:06.926 --rc geninfo_all_blocks=1 00:07:06.926 --rc geninfo_unexecuted_blocks=1 00:07:06.926 00:07:06.926 ' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.926 --rc genhtml_branch_coverage=1 00:07:06.926 --rc genhtml_function_coverage=1 00:07:06.926 --rc genhtml_legend=1 00:07:06.926 --rc geninfo_all_blocks=1 00:07:06.926 --rc geninfo_unexecuted_blocks=1 00:07:06.926 00:07:06.926 ' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.926 --rc genhtml_branch_coverage=1 00:07:06.926 --rc genhtml_function_coverage=1 00:07:06.926 --rc genhtml_legend=1 00:07:06.926 --rc geninfo_all_blocks=1 00:07:06.926 --rc geninfo_unexecuted_blocks=1 00:07:06.926 00:07:06.926 ' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.926 08:41:14 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:06.926 1+0 records in 00:07:06.926 1+0 records out 00:07:06.926 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00582041 s, 721 MB/s 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:06.926 1+0 records in 00:07:06.926 1+0 records out 00:07:06.926 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00423661 s, 990 MB/s 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:06.926 1+0 records in 00:07:06.926 1+0 records out 00:07:06.926 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00396269 s, 1.1 GB/s 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:06.926 ************************************ 00:07:06.926 START TEST dd_sparse_file_to_file 00:07:06.926 ************************************ 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:06.926 08:41:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:07.185 [2024-12-11 08:41:14.731188] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
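In plain shell, the prepare() and dd_sparse_file_to_file steps traced above amount to roughly the following sketch. Paths, sizes and flags come straight from the trace; spdk_dd abbreviates the full /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd path, and the <(gen_conf) process substitution is only an approximation of the /dev/fd/62 redirection used by the real dd/sparse.sh:

    truncate dd_sparse_aio_disk --size 104857600          # 100 MiB backing file for the dd_aio AIO bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at offset 16 MiB, leaving a 12 MiB hole
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at offset 32 MiB, leaving another 12 MiB hole
    # file_zero1 now has a 36 MiB apparent size but only 12 MiB of allocated data.
    spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json <(gen_conf)
    # --bs=12582912 is a 12 MiB I/O unit; --sparse enables hole skipping in the input.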
00:07:07.185 [2024-12-11 08:41:14.731333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:07:07.185 { 00:07:07.185 "subsystems": [ 00:07:07.185 { 00:07:07.185 "subsystem": "bdev", 00:07:07.185 "config": [ 00:07:07.185 { 00:07:07.185 "params": { 00:07:07.185 "block_size": 4096, 00:07:07.185 "filename": "dd_sparse_aio_disk", 00:07:07.185 "name": "dd_aio" 00:07:07.185 }, 00:07:07.185 "method": "bdev_aio_create" 00:07:07.185 }, 00:07:07.185 { 00:07:07.185 "params": { 00:07:07.185 "lvs_name": "dd_lvstore", 00:07:07.185 "bdev_name": "dd_aio" 00:07:07.185 }, 00:07:07.185 "method": "bdev_lvol_create_lvstore" 00:07:07.185 }, 00:07:07.185 { 00:07:07.185 "method": "bdev_wait_for_examine" 00:07:07.185 } 00:07:07.185 ] 00:07:07.185 } 00:07:07.185 ] 00:07:07.185 } 00:07:07.185 [2024-12-11 08:41:14.878131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.185 [2024-12-11 08:41:14.908861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.185 [2024-12-11 08:41:14.937237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.444  [2024-12-11T08:41:15.218Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:07.444 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:07.444 00:07:07.444 real 0m0.497s 00:07:07.444 user 0m0.308s 00:07:07.444 sys 0m0.237s 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.444 ************************************ 00:07:07.444 END TEST dd_sparse_file_to_file 00:07:07.444 ************************************ 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:07.444 ************************************ 00:07:07.444 START TEST dd_sparse_file_to_bdev 
00:07:07.444 ************************************ 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:07.444 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:07.702 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:07.702 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.702 [2024-12-11 08:41:15.269308] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:07.702 [2024-12-11 08:41:15.269429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62414 ] 00:07:07.702 { 00:07:07.702 "subsystems": [ 00:07:07.702 { 00:07:07.702 "subsystem": "bdev", 00:07:07.702 "config": [ 00:07:07.702 { 00:07:07.702 "params": { 00:07:07.702 "block_size": 4096, 00:07:07.702 "filename": "dd_sparse_aio_disk", 00:07:07.702 "name": "dd_aio" 00:07:07.702 }, 00:07:07.702 "method": "bdev_aio_create" 00:07:07.702 }, 00:07:07.702 { 00:07:07.702 "params": { 00:07:07.703 "lvs_name": "dd_lvstore", 00:07:07.703 "lvol_name": "dd_lvol", 00:07:07.703 "size_in_mib": 36, 00:07:07.703 "thin_provision": true 00:07:07.703 }, 00:07:07.703 "method": "bdev_lvol_create" 00:07:07.703 }, 00:07:07.703 { 00:07:07.703 "method": "bdev_wait_for_examine" 00:07:07.703 } 00:07:07.703 ] 00:07:07.703 } 00:07:07.703 ] 00:07:07.703 } 00:07:07.703 [2024-12-11 08:41:15.414977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.703 [2024-12-11 08:41:15.444574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.703 [2024-12-11 08:41:15.472660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.962  [2024-12-11T08:41:15.736Z] Copying: 12/36 [MB] (average 571 MBps) 00:07:07.962 00:07:07.962 00:07:07.962 real 0m0.474s 00:07:07.962 user 0m0.310s 00:07:07.962 sys 0m0.234s 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.962 ************************************ 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 END TEST dd_sparse_file_to_bdev 00:07:07.962 ************************************ 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.962 08:41:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:08.221 ************************************ 00:07:08.221 START TEST dd_sparse_bdev_to_file 00:07:08.221 ************************************ 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:08.221 08:41:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:08.221 { 00:07:08.221 "subsystems": [ 00:07:08.221 { 00:07:08.221 "subsystem": "bdev", 00:07:08.221 "config": [ 00:07:08.221 { 00:07:08.221 "params": { 00:07:08.221 "block_size": 4096, 00:07:08.221 "filename": "dd_sparse_aio_disk", 00:07:08.221 "name": "dd_aio" 00:07:08.221 }, 00:07:08.221 "method": "bdev_aio_create" 00:07:08.221 }, 00:07:08.221 { 00:07:08.221 "method": "bdev_wait_for_examine" 00:07:08.221 } 00:07:08.221 ] 00:07:08.221 } 00:07:08.221 ] 00:07:08.221 } 00:07:08.221 [2024-12-11 08:41:15.802258] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
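Taken together, the three sparse sub-tests form a round trip: file_zero1 to file_zero2 (dd_sparse_file_to_file), file_zero2 into the 36 MiB thin-provisioned dd_lvstore/dd_lvol (dd_sparse_file_to_bdev), and dd_lvstore/dd_lvol back out to file_zero3 (dd_sparse_bdev_to_file). Each leg is verified with stat, roughly as sketched below, so both the apparent size and the allocated block count must survive the copy, i.e. the holes are never filled in:

    stat --printf=%s file_zero2    # apparent size: 37748736 bytes (36 MiB)
    stat --printf=%s file_zero3    # must be identical after the bdev-to-file leg
    stat --printf=%b file_zero2    # allocated 512-byte blocks: 24576 (12 MiB of real data)
    stat --printf=%b file_zero3    # must match as well, showing --sparse preserved the holes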
00:07:08.221 [2024-12-11 08:41:15.802367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ] 00:07:08.221 [2024-12-11 08:41:15.949520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.221 [2024-12-11 08:41:15.979037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.480 [2024-12-11 08:41:16.008090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.480  [2024-12-11T08:41:16.254Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:08.480 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:08.480 00:07:08.480 real 0m0.489s 00:07:08.480 user 0m0.302s 00:07:08.480 sys 0m0.238s 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.480 08:41:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:08.480 ************************************ 00:07:08.480 END TEST dd_sparse_bdev_to_file 00:07:08.480 ************************************ 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:08.739 00:07:08.739 real 0m1.846s 00:07:08.739 user 0m1.096s 00:07:08.739 sys 0m0.921s 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.739 08:41:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:08.739 ************************************ 00:07:08.739 END TEST spdk_dd_sparse 00:07:08.739 ************************************ 00:07:08.739 08:41:16 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:08.739 08:41:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.739 08:41:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.739 08:41:16 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.739 ************************************ 00:07:08.739 START TEST spdk_dd_negative 00:07:08.739 ************************************ 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:08.739 * Looking for test storage... 00:07:08.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:08.739 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.999 --rc genhtml_branch_coverage=1 00:07:08.999 --rc genhtml_function_coverage=1 00:07:08.999 --rc genhtml_legend=1 00:07:08.999 --rc geninfo_all_blocks=1 00:07:08.999 --rc geninfo_unexecuted_blocks=1 00:07:08.999 00:07:08.999 ' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.999 --rc genhtml_branch_coverage=1 00:07:08.999 --rc genhtml_function_coverage=1 00:07:08.999 --rc genhtml_legend=1 00:07:08.999 --rc geninfo_all_blocks=1 00:07:08.999 --rc geninfo_unexecuted_blocks=1 00:07:08.999 00:07:08.999 ' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.999 --rc genhtml_branch_coverage=1 00:07:08.999 --rc genhtml_function_coverage=1 00:07:08.999 --rc genhtml_legend=1 00:07:08.999 --rc geninfo_all_blocks=1 00:07:08.999 --rc geninfo_unexecuted_blocks=1 00:07:08.999 00:07:08.999 ' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:08.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.999 --rc genhtml_branch_coverage=1 00:07:08.999 --rc genhtml_function_coverage=1 00:07:08.999 --rc genhtml_legend=1 00:07:08.999 --rc geninfo_all_blocks=1 00:07:08.999 --rc geninfo_unexecuted_blocks=1 00:07:08.999 00:07:08.999 ' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.999 ************************************ 00:07:08.999 START TEST 
dd_invalid_arguments 00:07:08.999 ************************************ 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.999 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:08.999 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:08.999 00:07:08.999 CPU options: 00:07:08.999 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:08.999 (like [0,1,10]) 00:07:08.999 --lcores lcore to CPU mapping list. The list is in the format: 00:07:08.999 [<,lcores[@CPUs]>...] 00:07:08.999 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:08.999 Within the group, '-' is used for range separator, 00:07:08.999 ',' is used for single number separator. 00:07:08.999 '( )' can be omitted for single element group, 00:07:08.999 '@' can be omitted if cpus and lcores have the same value 00:07:08.999 --disable-cpumask-locks Disable CPU core lock files. 00:07:08.999 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:08.999 pollers in the app support interrupt mode) 00:07:08.999 -p, --main-core main (primary) core for DPDK 00:07:08.999 00:07:08.999 Configuration options: 00:07:08.999 -c, --config, --json JSON config file 00:07:08.999 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:08.999 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:08.999 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:08.999 --rpcs-allowed comma-separated list of permitted RPCS 00:07:08.999 --json-ignore-init-errors don't exit on invalid config entry 00:07:08.999 00:07:08.999 Memory options: 00:07:08.999 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:08.999 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:08.999 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:08.999 -R, --huge-unlink unlink huge files after initialization 00:07:09.000 -n, --mem-channels number of memory channels used for DPDK 00:07:09.000 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:09.000 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:09.000 --no-huge run without using hugepages 00:07:09.000 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:09.000 -i, --shm-id shared memory ID (optional) 00:07:09.000 -g, --single-file-segments force creating just one hugetlbfs file 00:07:09.000 00:07:09.000 PCI options: 00:07:09.000 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:09.000 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:09.000 -u, --no-pci disable PCI access 00:07:09.000 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:09.000 00:07:09.000 Log options: 00:07:09.000 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:09.000 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:09.000 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:09.000 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:09.000 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:09.000 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:09.000 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:09.000 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:09.000 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:09.000 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:09.000 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:09.000 --silence-noticelog disable notice level logging to stderr 00:07:09.000 00:07:09.000 Trace options: 00:07:09.000 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:09.000 setting 0 to disable trace (default 32768) 00:07:09.000 Tracepoints vary in size and can use more than one trace entry. 00:07:09.000 -e, --tpoint-group [:] 00:07:09.000 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:09.000 [2024-12-11 08:41:16.597108] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:07:09.000 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:09.000 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:09.000 bdev_raid, scheduler, all). 00:07:09.000 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:09.000 a tracepoint group. First tpoint inside a group can be enabled by 00:07:09.000 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:09.000 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:09.000 in /include/spdk_internal/trace_defs.h 00:07:09.000 00:07:09.000 Other options: 00:07:09.000 -h, --help show this usage 00:07:09.000 -v, --version print SPDK version 00:07:09.000 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:09.000 --env-context Opaque context for use of the env implementation 00:07:09.000 00:07:09.000 Application specific: 00:07:09.000 [--------- DD Options ---------] 00:07:09.000 --if Input file. Must specify either --if or --ib. 00:07:09.000 --ib Input bdev. Must specifier either --if or --ib 00:07:09.000 --of Output file. Must specify either --of or --ob. 00:07:09.000 --ob Output bdev. Must specify either --of or --ob. 00:07:09.000 --iflag Input file flags. 00:07:09.000 --oflag Output file flags. 00:07:09.000 --bs I/O unit size (default: 4096) 00:07:09.000 --qd Queue depth (default: 2) 00:07:09.000 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:09.000 --skip Skip this many I/O units at start of input. (default: 0) 00:07:09.000 --seek Skip this many I/O units at start of output. (default: 0) 00:07:09.000 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:09.000 --sparse Enable hole skipping in input target 00:07:09.000 Available iflag and oflag values: 00:07:09.000 append - append mode 00:07:09.000 direct - use direct I/O for data 00:07:09.000 directory - fail unless a directory 00:07:09.000 dsync - use synchronized I/O for data 00:07:09.000 noatime - do not update access time 00:07:09.000 noctty - do not assign controlling terminal from file 00:07:09.000 nofollow - do not follow symlinks 00:07:09.000 nonblock - use non-blocking I/O 00:07:09.000 sync - use synchronized I/O for data and metadata 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.000 00:07:09.000 real 0m0.070s 00:07:09.000 user 0m0.050s 00:07:09.000 sys 0m0.019s 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.000 ************************************ 00:07:09.000 END TEST dd_invalid_arguments 00:07:09.000 ************************************ 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.000 ************************************ 00:07:09.000 START TEST dd_double_input 00:07:09.000 ************************************ 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:09.000 [2024-12-11 08:41:16.720116] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
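Every case in spdk_dd_negative follows the same pattern: the NOT wrapper from autotest_common.sh runs spdk_dd with a deliberately invalid flag combination, and the test passes only if spdk_dd exits non-zero after printing the matching *ERROR* line. Sketched for the two cases traced so far, with the dump-file paths abbreviated:

    NOT spdk_dd --ii= --ob=                           # unrecognized option -> "Invalid arguments"
    NOT spdk_dd --if=test/dd/dd.dump0 --ib= --ob=     # "You may specify either --if or --ib, but not both."

The remaining cases below repeat this pattern with --of and --ob given together, a missing input or output, --bs=0, an oversized --bs, a negative --count, and oflag/iflag values supplied without a matching file.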
00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.000 00:07:09.000 real 0m0.074s 00:07:09.000 user 0m0.049s 00:07:09.000 sys 0m0.023s 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.000 ************************************ 00:07:09.000 END TEST dd_double_input 00:07:09.000 ************************************ 00:07:09.000 08:41:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.259 ************************************ 00:07:09.259 START TEST dd_double_output 00:07:09.259 ************************************ 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:09.259 [2024-12-11 08:41:16.848967] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.259 00:07:09.259 real 0m0.077s 00:07:09.259 user 0m0.044s 00:07:09.259 sys 0m0.031s 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.259 ************************************ 00:07:09.259 08:41:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:09.259 END TEST dd_double_output 00:07:09.259 ************************************ 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.260 ************************************ 00:07:09.260 START TEST dd_no_input 00:07:09.260 ************************************ 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:09.260 [2024-12-11 08:41:16.973308] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.260 00:07:09.260 real 0m0.076s 00:07:09.260 user 0m0.052s 00:07:09.260 sys 0m0.023s 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.260 08:41:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:09.260 ************************************ 00:07:09.260 END TEST dd_no_input 00:07:09.260 ************************************ 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 ************************************ 00:07:09.519 START TEST dd_no_output 00:07:09.519 ************************************ 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.519 [2024-12-11 08:41:17.102860] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:07:09.519 08:41:17 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.519 00:07:09.519 real 0m0.079s 00:07:09.519 user 0m0.052s 00:07:09.519 sys 0m0.026s 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.519 ************************************ 00:07:09.519 END TEST dd_no_output 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 ************************************ 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 ************************************ 00:07:09.519 START TEST dd_wrong_blocksize 00:07:09.519 ************************************ 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:09.519 [2024-12-11 08:41:17.233598] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.519 00:07:09.519 real 0m0.079s 00:07:09.519 user 0m0.045s 00:07:09.519 sys 0m0.033s 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.519 ************************************ 00:07:09.519 END TEST dd_wrong_blocksize 00:07:09.519 ************************************ 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.519 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.778 ************************************ 00:07:09.778 START TEST dd_smaller_blocksize 00:07:09.778 ************************************ 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.778 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.779 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.779 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.779 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.779 
08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.779 08:41:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:09.779 [2024-12-11 08:41:17.361023] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:09.779 [2024-12-11 08:41:17.361169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:07:09.779 [2024-12-11 08:41:17.514403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.037 [2024-12-11 08:41:17.553188] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.037 [2024-12-11 08:41:17.585607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.296 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:10.555 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:10.555 [2024-12-11 08:41:18.091123] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:10.555 [2024-12-11 08:41:18.091235] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.555 [2024-12-11 08:41:18.155758] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.555 00:07:10.555 real 0m0.909s 00:07:10.555 user 0m0.337s 00:07:10.555 sys 0m0.465s 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.555 ************************************ 00:07:10.555 END TEST dd_smaller_blocksize 00:07:10.555 ************************************ 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:10.555 ************************************ 00:07:10.555 START TEST dd_invalid_count 00:07:10.555 ************************************ 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
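dd_smaller_blocksize, traced above, is the one case that fails at runtime rather than at argument parsing: --bs=99999999999999 asks for an I/O unit of roughly 100 TB, the EAL cannot find a suitable memseg ("couldn't find suitable memseg_list"), and spdk_dd gives up with "Cannot allocate memory - try smaller block size value". The es=244, es=116, es=1 bookkeeping in the trace is NOT accepting that non-zero exit as the expected result. Roughly:

    NOT spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999
    # The huge --bs cannot be satisfied from hugepages, spdk_dd exits non-zero,
    # and that failure is exactly what this negative test expects.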
00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.555 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:10.555 [2024-12-11 08:41:18.323539] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.815 00:07:10.815 real 0m0.081s 00:07:10.815 user 0m0.048s 00:07:10.815 sys 0m0.032s 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.815 ************************************ 00:07:10.815 END TEST dd_invalid_count 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:10.815 ************************************ 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:10.815 ************************************ 
00:07:10.815 START TEST dd_invalid_oflag 00:07:10.815 ************************************ 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:10.815 [2024-12-11 08:41:18.447042] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.815 00:07:10.815 real 0m0.078s 00:07:10.815 user 0m0.044s 00:07:10.815 sys 0m0.031s 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.815 ************************************ 00:07:10.815 END TEST dd_invalid_oflag 00:07:10.815 ************************************ 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:10.815 ************************************ 00:07:10.815 START TEST dd_invalid_iflag 00:07:10.815 
************************************ 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.815 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:10.815 [2024-12-11 08:41:18.572803] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.075 00:07:11.075 real 0m0.077s 00:07:11.075 user 0m0.056s 00:07:11.075 sys 0m0.020s 00:07:11.075 ************************************ 00:07:11.075 END TEST dd_invalid_iflag 00:07:11.075 ************************************ 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:11.075 ************************************ 00:07:11.075 START TEST dd_unknown_flag 00:07:11.075 ************************************ 00:07:11.075 
08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.075 08:41:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:11.075 [2024-12-11 08:41:18.704532] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
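Each of these invocations is wrapped in NOT, the harness helper from common/autotest_common.sh that treats command failure as test success. A minimal stand-in with the same intent (omitting the harness's es= exit-code normalization visible in the trace) might look like the sketch below; it is not the real implementation.

    # Minimal stand-in for the NOT helper seen in the trace: succeed only when
    # the wrapped command fails. The real helper in common/autotest_common.sh
    # additionally captures and maps the exit code (the es=... lines above).
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0
    }

    # Mirroring the unknown-flag case traced here:
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
    # trace reports "Unknown file flag: -1", so NOT returns 0 and the test passes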
00:07:11.075 [2024-12-11 08:41:18.704622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:07:11.335 [2024-12-11 08:41:18.853586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.335 [2024-12-11 08:41:18.883460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.335 [2024-12-11 08:41:18.910652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.335 [2024-12-11 08:41:18.928889] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:11.335 [2024-12-11 08:41:18.929217] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.335 [2024-12-11 08:41:18.929321] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:11.335 [2024-12-11 08:41:18.929438] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.335 [2024-12-11 08:41:18.929707] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:11.335 [2024-12-11 08:41:18.929827] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.335 [2024-12-11 08:41:18.929995] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:11.335 [2024-12-11 08:41:18.930129] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:11.335 [2024-12-11 08:41:18.990751] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:11.335 ************************************ 00:07:11.335 END TEST dd_unknown_flag 00:07:11.335 ************************************ 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.335 00:07:11.335 real 0m0.408s 00:07:11.335 user 0m0.206s 00:07:11.335 sys 0m0.102s 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:11.335 ************************************ 00:07:11.335 START TEST dd_invalid_json 00:07:11.335 ************************************ 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.335 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:11.594 [2024-12-11 08:41:19.157767] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
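For the invalid-JSON case, the harness hands spdk_dd a configuration on file descriptor 62 (--json /dev/fd/62) that carries nothing (the bare `:` no-op in the trace), which is what trips the "JSON data cannot be empty" error below. Presumably the descriptor is fed via process substitution; an equivalent one-off invocation could look like the following sketch.

    # Sketch of the empty-JSON negative case; <(:) supplies an empty document
    # on a /dev/fd path, standing in for the harness's fd 62 plumbing.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(:)
    # expected per the trace: "JSON data cannot be empty", non-zero exit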
00:07:11.594 [2024-12-11 08:41:19.157858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62794 ] 00:07:11.594 [2024-12-11 08:41:19.300298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.594 [2024-12-11 08:41:19.330043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.594 [2024-12-11 08:41:19.330131] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:11.594 [2024-12-11 08:41:19.330145] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.594 [2024-12-11 08:41:19.330189] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.594 [2024-12-11 08:41:19.330227] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.853 00:07:11.853 real 0m0.294s 00:07:11.853 user 0m0.127s 00:07:11.853 sys 0m0.063s 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.853 ************************************ 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.853 END TEST dd_invalid_json 00:07:11.853 ************************************ 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:11.853 ************************************ 00:07:11.853 START TEST dd_invalid_seek 00:07:11.853 ************************************ 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:11.853 
08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.853 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:11.853 [2024-12-11 08:41:19.521734] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:11.853 [2024-12-11 08:41:19.522404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62823 ] 00:07:11.853 { 00:07:11.853 "subsystems": [ 00:07:11.853 { 00:07:11.854 "subsystem": "bdev", 00:07:11.854 "config": [ 00:07:11.854 { 00:07:11.854 "params": { 00:07:11.854 "block_size": 512, 00:07:11.854 "num_blocks": 512, 00:07:11.854 "name": "malloc0" 00:07:11.854 }, 00:07:11.854 "method": "bdev_malloc_create" 00:07:11.854 }, 00:07:11.854 { 00:07:11.854 "params": { 00:07:11.854 "block_size": 512, 00:07:11.854 "num_blocks": 512, 00:07:11.854 "name": "malloc1" 00:07:11.854 }, 00:07:11.854 "method": "bdev_malloc_create" 00:07:11.854 }, 00:07:11.854 { 00:07:11.854 "method": "bdev_wait_for_examine" 00:07:11.854 } 00:07:11.854 ] 00:07:11.854 } 00:07:11.854 ] 00:07:11.854 } 00:07:12.113 [2024-12-11 08:41:19.670015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.113 [2024-12-11 08:41:19.703080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.113 [2024-12-11 08:41:19.733651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.113 [2024-12-11 08:41:19.778002] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:12.113 [2024-12-11 08:41:19.778079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.113 [2024-12-11 08:41:19.843351] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:12.371 ************************************ 00:07:12.371 END TEST dd_invalid_seek 00:07:12.371 ************************************ 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.371 00:07:12.371 real 0m0.463s 00:07:12.371 user 0m0.323s 00:07:12.371 sys 0m0.121s 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:12.371 ************************************ 00:07:12.371 START TEST dd_invalid_skip 00:07:12.371 ************************************ 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:12.371 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.372 08:41:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:12.372 { 00:07:12.372 "subsystems": [ 00:07:12.372 { 00:07:12.372 "subsystem": "bdev", 00:07:12.372 "config": [ 00:07:12.372 { 00:07:12.372 "params": { 00:07:12.372 "block_size": 512, 00:07:12.372 "num_blocks": 512, 00:07:12.372 "name": "malloc0" 00:07:12.372 }, 00:07:12.372 "method": "bdev_malloc_create" 00:07:12.372 }, 00:07:12.372 { 00:07:12.372 "params": { 00:07:12.372 "block_size": 512, 00:07:12.372 "num_blocks": 512, 00:07:12.372 "name": "malloc1" 
00:07:12.372 }, 00:07:12.372 "method": "bdev_malloc_create" 00:07:12.372 }, 00:07:12.372 { 00:07:12.372 "method": "bdev_wait_for_examine" 00:07:12.372 } 00:07:12.372 ] 00:07:12.372 } 00:07:12.372 ] 00:07:12.372 } 00:07:12.372 [2024-12-11 08:41:20.010315] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:12.372 [2024-12-11 08:41:20.010706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62857 ] 00:07:12.631 [2024-12-11 08:41:20.154274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.631 [2024-12-11 08:41:20.196359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.631 [2024-12-11 08:41:20.229359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.631 [2024-12-11 08:41:20.279458] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:12.631 [2024-12-11 08:41:20.279873] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.631 [2024-12-11 08:41:20.345349] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.631 ************************************ 00:07:12.631 END TEST dd_invalid_skip 00:07:12.631 ************************************ 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.631 00:07:12.631 real 0m0.450s 00:07:12.631 user 0m0.283s 00:07:12.631 sys 0m0.120s 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.631 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:12.890 ************************************ 00:07:12.890 START TEST dd_invalid_input_count 00:07:12.890 ************************************ 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:12.890 08:41:20 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.890 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:12.890 { 00:07:12.890 "subsystems": [ 00:07:12.890 { 00:07:12.890 "subsystem": "bdev", 00:07:12.890 "config": [ 00:07:12.890 { 00:07:12.890 "params": { 00:07:12.890 "block_size": 512, 00:07:12.890 "num_blocks": 512, 00:07:12.890 "name": "malloc0" 00:07:12.890 }, 00:07:12.890 "method": "bdev_malloc_create" 00:07:12.890 }, 00:07:12.890 { 00:07:12.890 "params": { 00:07:12.890 "block_size": 512, 00:07:12.890 "num_blocks": 512, 00:07:12.890 "name": "malloc1" 00:07:12.890 }, 00:07:12.890 "method": "bdev_malloc_create" 00:07:12.890 }, 00:07:12.890 { 00:07:12.890 "method": "bdev_wait_for_examine" 00:07:12.890 } 
00:07:12.890 ] 00:07:12.890 } 00:07:12.890 ] 00:07:12.890 } 00:07:12.890 [2024-12-11 08:41:20.511083] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:12.890 [2024-12-11 08:41:20.511766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62890 ] 00:07:12.890 [2024-12-11 08:41:20.655601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.150 [2024-12-11 08:41:20.686845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.150 [2024-12-11 08:41:20.715201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.150 [2024-12-11 08:41:20.762061] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:13.150 [2024-12-11 08:41:20.762135] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.150 [2024-12-11 08:41:20.828746] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.150 00:07:13.150 real 0m0.447s 00:07:13.150 user 0m0.290s 00:07:13.150 sys 0m0.112s 00:07:13.150 ************************************ 00:07:13.150 END TEST dd_invalid_input_count 00:07:13.150 ************************************ 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.150 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 ************************************ 00:07:13.410 START TEST dd_invalid_output_count 00:07:13.410 ************************************ 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.410 08:41:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:13.410 { 00:07:13.410 "subsystems": [ 00:07:13.410 { 00:07:13.410 "subsystem": "bdev", 00:07:13.410 "config": [ 00:07:13.410 { 00:07:13.410 "params": { 00:07:13.410 "block_size": 512, 00:07:13.410 "num_blocks": 512, 00:07:13.410 "name": "malloc0" 00:07:13.410 }, 00:07:13.410 "method": "bdev_malloc_create" 00:07:13.410 }, 00:07:13.410 { 00:07:13.410 "method": "bdev_wait_for_examine" 00:07:13.410 } 00:07:13.410 ] 00:07:13.410 } 00:07:13.410 ] 00:07:13.410 } 00:07:13.410 [2024-12-11 08:41:21.011395] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
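The output-count failure that follows comes down to a bounds check against the malloc0 bdev declared above: 512 blocks are available and --count=513 asks for one more. A toy restatement of that check is sketched below; it is illustrative only, the real check lives in dd_run() in spdk_dd.c as the error line in the trace shows.

    # Toy restatement of the bounds check exercised by dd_invalid_output_count.
    num_blocks=512   # from the bdev_malloc_create params above
    count=513        # --count value passed by the negative test
    if (( count > num_blocks )); then
        echo "--count value too big ($count) - only $num_blocks blocks available in output"
    fi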
00:07:13.410 [2024-12-11 08:41:21.011493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62924 ] 00:07:13.410 [2024-12-11 08:41:21.155052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.669 [2024-12-11 08:41:21.184680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.669 [2024-12-11 08:41:21.212675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.669 [2024-12-11 08:41:21.248269] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:13.669 [2024-12-11 08:41:21.248339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.669 [2024-12-11 08:41:21.312830] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.669 00:07:13.669 real 0m0.432s 00:07:13.669 user 0m0.292s 00:07:13.669 sys 0m0.097s 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 ************************************ 00:07:13.669 END TEST dd_invalid_output_count 00:07:13.669 ************************************ 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 ************************************ 00:07:13.669 START TEST dd_bs_not_multiple 00:07:13.669 ************************************ 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:13.669 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:13.670 08:41:21 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.670 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.929 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.929 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.929 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.929 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:13.929 [2024-12-11 08:41:21.500255] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
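The final negative case rejects --bs=513 because the block size must be a whole multiple of the 512-byte native block size of the malloc bdevs set up above. The corresponding check, restated as a toy sketch (the real check is performed by dd_run() in spdk_dd.c, per the error line in the trace below):

    # Toy restatement of the bs-multiple check exercised by dd_bs_not_multiple.
    native_bs=512    # input malloc bdev block size from the setup above
    bs=513           # --bs value passed by the negative test
    if (( bs % native_bs != 0 )); then
        echo "--bs value must be a multiple of input native block size ($native_bs)"
    fi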
00:07:13.929 [2024-12-11 08:41:21.500345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62950 ] 00:07:13.929 { 00:07:13.929 "subsystems": [ 00:07:13.929 { 00:07:13.929 "subsystem": "bdev", 00:07:13.929 "config": [ 00:07:13.929 { 00:07:13.929 "params": { 00:07:13.929 "block_size": 512, 00:07:13.929 "num_blocks": 512, 00:07:13.929 "name": "malloc0" 00:07:13.929 }, 00:07:13.929 "method": "bdev_malloc_create" 00:07:13.929 }, 00:07:13.929 { 00:07:13.929 "params": { 00:07:13.929 "block_size": 512, 00:07:13.929 "num_blocks": 512, 00:07:13.929 "name": "malloc1" 00:07:13.929 }, 00:07:13.929 "method": "bdev_malloc_create" 00:07:13.929 }, 00:07:13.929 { 00:07:13.929 "method": "bdev_wait_for_examine" 00:07:13.929 } 00:07:13.929 ] 00:07:13.929 } 00:07:13.929 ] 00:07:13.929 } 00:07:13.929 [2024-12-11 08:41:21.652272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.929 [2024-12-11 08:41:21.691247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.188 [2024-12-11 08:41:21.725569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.188 [2024-12-11 08:41:21.776118] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:14.188 [2024-12-11 08:41:21.776214] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.188 [2024-12-11 08:41:21.851356] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.188 ************************************ 00:07:14.188 END TEST dd_bs_not_multiple 00:07:14.188 ************************************ 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.188 00:07:14.188 real 0m0.471s 00:07:14.188 user 0m0.310s 00:07:14.188 sys 0m0.123s 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:14.188 ************************************ 00:07:14.188 END TEST spdk_dd_negative 00:07:14.188 ************************************ 00:07:14.188 00:07:14.188 real 0m5.612s 00:07:14.188 user 0m3.026s 00:07:14.188 sys 0m2.023s 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.188 08:41:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.447 ************************************ 00:07:14.447 END TEST spdk_dd 00:07:14.447 ************************************ 00:07:14.447 00:07:14.447 real 1m6.063s 00:07:14.447 user 0m42.735s 00:07:14.447 sys 0m27.465s 00:07:14.447 08:41:21 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:14.447 08:41:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.447 08:41:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:14.447 08:41:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.447 08:41:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.447 08:41:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:14.447 08:41:22 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:14.447 08:41:22 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:14.447 08:41:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.447 08:41:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.447 08:41:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.447 ************************************ 00:07:14.447 START TEST nvmf_tcp 00:07:14.447 ************************************ 00:07:14.447 08:41:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:14.447 * Looking for test storage... 00:07:14.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:14.447 08:41:22 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.447 08:41:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.447 08:41:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.707 08:41:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.707 --rc genhtml_branch_coverage=1 00:07:14.707 --rc genhtml_function_coverage=1 00:07:14.707 --rc genhtml_legend=1 00:07:14.707 --rc geninfo_all_blocks=1 00:07:14.707 --rc geninfo_unexecuted_blocks=1 00:07:14.707 00:07:14.707 ' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.707 --rc genhtml_branch_coverage=1 00:07:14.707 --rc genhtml_function_coverage=1 00:07:14.707 --rc genhtml_legend=1 00:07:14.707 --rc geninfo_all_blocks=1 00:07:14.707 --rc geninfo_unexecuted_blocks=1 00:07:14.707 00:07:14.707 ' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.707 --rc genhtml_branch_coverage=1 00:07:14.707 --rc genhtml_function_coverage=1 00:07:14.707 --rc genhtml_legend=1 00:07:14.707 --rc geninfo_all_blocks=1 00:07:14.707 --rc geninfo_unexecuted_blocks=1 00:07:14.707 00:07:14.707 ' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.707 --rc genhtml_branch_coverage=1 00:07:14.707 --rc genhtml_function_coverage=1 00:07:14.707 --rc genhtml_legend=1 00:07:14.707 --rc geninfo_all_blocks=1 00:07:14.707 --rc geninfo_unexecuted_blocks=1 00:07:14.707 00:07:14.707 ' 00:07:14.707 08:41:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:14.707 08:41:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:14.707 08:41:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.707 08:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.707 ************************************ 00:07:14.707 START TEST nvmf_target_core 00:07:14.707 ************************************ 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:14.707 * Looking for test storage... 00:07:14.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.707 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.968 --rc genhtml_branch_coverage=1 00:07:14.968 --rc genhtml_function_coverage=1 00:07:14.968 --rc genhtml_legend=1 00:07:14.968 --rc geninfo_all_blocks=1 00:07:14.968 --rc geninfo_unexecuted_blocks=1 00:07:14.968 00:07:14.968 ' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.968 --rc genhtml_branch_coverage=1 00:07:14.968 --rc genhtml_function_coverage=1 00:07:14.968 --rc genhtml_legend=1 00:07:14.968 --rc geninfo_all_blocks=1 00:07:14.968 --rc geninfo_unexecuted_blocks=1 00:07:14.968 00:07:14.968 ' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.968 --rc genhtml_branch_coverage=1 00:07:14.968 --rc genhtml_function_coverage=1 00:07:14.968 --rc genhtml_legend=1 00:07:14.968 --rc geninfo_all_blocks=1 00:07:14.968 --rc geninfo_unexecuted_blocks=1 00:07:14.968 00:07:14.968 ' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.968 --rc genhtml_branch_coverage=1 00:07:14.968 --rc genhtml_function_coverage=1 00:07:14.968 --rc genhtml_legend=1 00:07:14.968 --rc geninfo_all_blocks=1 00:07:14.968 --rc geninfo_unexecuted_blocks=1 00:07:14.968 00:07:14.968 ' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.968 ************************************ 00:07:14.968 START TEST nvmf_host_management 00:07:14.968 ************************************ 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:14.968 * Looking for test storage... 
00:07:14.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.968 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.969 --rc genhtml_branch_coverage=1 00:07:14.969 --rc genhtml_function_coverage=1 00:07:14.969 --rc genhtml_legend=1 00:07:14.969 --rc geninfo_all_blocks=1 00:07:14.969 --rc geninfo_unexecuted_blocks=1 00:07:14.969 00:07:14.969 ' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.969 --rc genhtml_branch_coverage=1 00:07:14.969 --rc genhtml_function_coverage=1 00:07:14.969 --rc genhtml_legend=1 00:07:14.969 --rc geninfo_all_blocks=1 00:07:14.969 --rc geninfo_unexecuted_blocks=1 00:07:14.969 00:07:14.969 ' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.969 --rc genhtml_branch_coverage=1 00:07:14.969 --rc genhtml_function_coverage=1 00:07:14.969 --rc genhtml_legend=1 00:07:14.969 --rc geninfo_all_blocks=1 00:07:14.969 --rc geninfo_unexecuted_blocks=1 00:07:14.969 00:07:14.969 ' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.969 --rc genhtml_branch_coverage=1 00:07:14.969 --rc genhtml_function_coverage=1 00:07:14.969 --rc genhtml_legend=1 00:07:14.969 --rc geninfo_all_blocks=1 00:07:14.969 --rc geninfo_unexecuted_blocks=1 00:07:14.969 00:07:14.969 ' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.969 08:41:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:14.969 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:15.229 Cannot find device "nvmf_init_br" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:15.229 Cannot find device "nvmf_init_br2" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:15.229 Cannot find device "nvmf_tgt_br" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.229 Cannot find device "nvmf_tgt_br2" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:15.229 Cannot find device "nvmf_init_br" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:15.229 Cannot find device "nvmf_init_br2" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:15.229 Cannot find device "nvmf_tgt_br" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:15.229 Cannot find device "nvmf_tgt_br2" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:15.229 Cannot find device "nvmf_br" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:15.229 Cannot find device "nvmf_init_if" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:15.229 Cannot find device "nvmf_init_if2" 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:15.229 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:15.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:15.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:07:15.488 00:07:15.488 --- 10.0.0.3 ping statistics --- 00:07:15.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.488 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:15.488 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:15.488 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:07:15.488 00:07:15.488 --- 10.0.0.4 ping statistics --- 00:07:15.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.488 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:15.488 00:07:15.488 --- 10.0.0.1 ping statistics --- 00:07:15.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.488 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:15.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:15.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:07:15.488 00:07:15.488 --- 10.0.0.2 ping statistics --- 00:07:15.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.488 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.488 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=63296 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 63296 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63296 ']' 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.489 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.747 [2024-12-11 08:41:23.306962] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
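For orientation, the nvmf_veth_init sequence traced above wires two initiator-side veth interfaces and two target-side veth interfaces (the latter inside a dedicated network namespace) onto a single bridge, so the TCP target addresses 10.0.0.3/10.0.0.4 are reachable from the host before nvmf_tgt starts. The following is a condensed, standalone re-statement of those same commands, not part of the test scripts themselves; it assumes root privileges and a stock iproute2/iptables environment like the one in this run, and omits the SPDK_NVMF comment tags the harness adds to its iptables rules.

# Condensed from the nvmf_veth_init trace above (interface and address names taken from the log).
ip netns add nvmf_tgt_ns_spdk
# veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target-side interfaces live in the test namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing as seen in the log: initiators 10.0.0.1/.2, targets 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge ties the four peer ends together; open TCP/4420 for NVMe-oF.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability checks matching the pings in the log.
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1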
00:07:15.747 [2024-12-11 08:41:23.307076] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.747 [2024-12-11 08:41:23.461206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.747 [2024-12-11 08:41:23.503664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.747 [2024-12-11 08:41:23.503980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.747 [2024-12-11 08:41:23.504191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.747 [2024-12-11 08:41:23.504409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.747 [2024-12-11 08:41:23.504454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.747 [2024-12-11 08:41:23.505509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.747 [2024-12-11 08:41:23.505608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.747 [2024-12-11 08:41:23.505752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.747 [2024-12-11 08:41:23.505759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.008 [2024-12-11 08:41:23.541346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 [2024-12-11 08:41:23.643015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
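The target-side configuration in this test is the nvmf_create_transport call above plus the rpcs.txt file that the cat | rpc_cmd sequence below pipes into the running nvmf_tgt. A rough standalone equivalent using scripts/rpc.py is sketched here; the transport options, Malloc bdev geometry (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), NQNs, serial number, and listener address are taken from this log, while the exact contents and ordering of the real rpcs.txt, and whether the host NQN is added explicitly or via allow-any-host, are assumptions.

cd /home/vagrant/spdk_repo/spdk
# TCP transport with 8 KiB I/O unit size, as in the rpc_cmd call above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc backing bdev with 512-byte blocks.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode0, namespace, listener on 10.0.0.3:4420, and the host used by bdevperf.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0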
00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 Malloc0 00:07:16.008 [2024-12-11 08:41:23.710819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=63348 00:07:16.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 63348 /var/tmp/bdevperf.sock 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 63348 ']' 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:16.008 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:16.008 { 00:07:16.008 "params": { 00:07:16.008 "name": "Nvme$subsystem", 00:07:16.009 "trtype": "$TEST_TRANSPORT", 00:07:16.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.009 "adrfam": "ipv4", 00:07:16.009 "trsvcid": "$NVMF_PORT", 00:07:16.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.009 "hdgst": ${hdgst:-false}, 00:07:16.009 "ddgst": ${ddgst:-false} 00:07:16.009 }, 00:07:16.009 "method": "bdev_nvme_attach_controller" 00:07:16.009 } 00:07:16.009 EOF 00:07:16.009 )") 00:07:16.009 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:16.009 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:16.009 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:16.009 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:16.009 "params": { 00:07:16.009 "name": "Nvme0", 00:07:16.009 "trtype": "tcp", 00:07:16.009 "traddr": "10.0.0.3", 00:07:16.009 "adrfam": "ipv4", 00:07:16.009 "trsvcid": "4420", 00:07:16.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:16.009 "hdgst": false, 00:07:16.009 "ddgst": false 00:07:16.009 }, 00:07:16.009 "method": "bdev_nvme_attach_controller" 00:07:16.009 }' 00:07:16.266 [2024-12-11 08:41:23.814401] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:07:16.266 [2024-12-11 08:41:23.814676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63348 ] 00:07:16.266 [2024-12-11 08:41:23.966056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.266 [2024-12-11 08:41:24.005256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.525 [2024-12-11 08:41:24.046863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.525 Running I/O for 10 seconds... 
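The JSON fragment printed by gen_nvmf_target_json above is what bdevperf reads from /dev/fd/63. Written out as a standalone invocation it looks roughly like the sketch below: the bdev_nvme_attach_controller parameters and the bdevperf flags are copied from the log, the outer "subsystems"/"config" wrapper follows the same shape as the bdev config shown earlier for the dd tests, and the temporary file path and shell wrapper are illustrative only.

# Illustrative standalone equivalent of the traced bdevperf run.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64 outstanding I/Os of 64 KiB each, 'verify' workload for 10 seconds, with a
# private RPC socket so bdev_get_iostat can be polled while the run is in flight.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10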
00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:16.525 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:16.783 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:16.783 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.783 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.783 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.783 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.783 08:41:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.042 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 [2024-12-11 08:41:24.612590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.042 [2024-12-11 08:41:24.612640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.042 [2024-12-11 08:41:24.612665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.042 [2024-12-11 08:41:24.612681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:17.043 [2024-12-11 08:41:24.612976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.612985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.612997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 
[2024-12-11 08:41:24.613203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 
08:41:24.613412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.043 [2024-12-11 08:41:24.613539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.043 [2024-12-11 08:41:24.613548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 
08:41:24.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 
08:41:24.613857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.613981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.613990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.614018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.044 [2024-12-11 08:41:24.614039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672e30 is same with the state(6) to be set 00:07:17.044 [2024-12-11 08:41:24.614225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.044 [2024-12-11 08:41:24.614244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.044 [2024-12-11 08:41:24.614264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.044 [2024-12-11 08:41:24.614287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:17.044 [2024-12-11 08:41:24.614307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.044 [2024-12-11 08:41:24.614315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66e9d0 is same with the state(6) to be set 00:07:17.044 [2024-12-11 08:41:24.615453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:17.044 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:17.044 00:07:17.044 Latency(us) 00:07:17.044 [2024-12-11T08:41:24.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.044 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:17.044 Job: Nvme0n1 ended in about 0.46 seconds with error 00:07:17.044 Verification LBA range: start 0x0 length 0x400 00:07:17.044 Nvme0n1 : 0.46 1394.53 87.16 139.45 0.00 40106.75 2159.71 45041.11 00:07:17.044 [2024-12-11T08:41:24.818Z] =================================================================================================================== 00:07:17.044 [2024-12-11T08:41:24.818Z] Total : 1394.53 87.16 139.45 0.00 40106.75 2159.71 45041.11 00:07:17.044 [2024-12-11 08:41:24.617552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.044 [2024-12-11 08:41:24.617592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66e9d0 (9): Bad file descriptor 00:07:17.044 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.044 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:17.044 [2024-12-11 08:41:24.623164] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 63348 00:07:17.979 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (63348) - No such process 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.979 { 00:07:17.979 "params": { 00:07:17.979 "name": "Nvme$subsystem", 00:07:17.979 "trtype": "$TEST_TRANSPORT", 00:07:17.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.979 "adrfam": "ipv4", 00:07:17.979 "trsvcid": "$NVMF_PORT", 00:07:17.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.979 "hdgst": ${hdgst:-false}, 00:07:17.979 "ddgst": ${ddgst:-false} 00:07:17.979 }, 00:07:17.979 "method": "bdev_nvme_attach_controller" 00:07:17.979 } 00:07:17.979 EOF 00:07:17.979 )") 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.979 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.979 "params": { 00:07:17.979 "name": "Nvme0", 00:07:17.979 "trtype": "tcp", 00:07:17.979 "traddr": "10.0.0.3", 00:07:17.979 "adrfam": "ipv4", 00:07:17.979 "trsvcid": "4420", 00:07:17.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.979 "hdgst": false, 00:07:17.979 "ddgst": false 00:07:17.979 }, 00:07:17.979 "method": "bdev_nvme_attach_controller" 00:07:17.979 }' 00:07:17.979 [2024-12-11 08:41:25.683647] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:17.979 [2024-12-11 08:41:25.683750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:07:18.237 [2024-12-11 08:41:25.826433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.237 [2024-12-11 08:41:25.877264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.237 [2024-12-11 08:41:25.924389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.494 Running I/O for 1 seconds... 00:07:19.428 1542.00 IOPS, 96.38 MiB/s 00:07:19.428 Latency(us) 00:07:19.428 [2024-12-11T08:41:27.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.428 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:19.428 Verification LBA range: start 0x0 length 0x400 00:07:19.428 Nvme0n1 : 1.04 1602.84 100.18 0.00 0.00 39039.03 3544.90 42181.35 00:07:19.428 [2024-12-11T08:41:27.202Z] =================================================================================================================== 00:07:19.428 [2024-12-11T08:41:27.202Z] Total : 1602.84 100.18 0.00 0.00 39039.03 3544.90 42181.35 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.687 rmmod nvme_tcp 00:07:19.687 rmmod nvme_fabrics 00:07:19.687 rmmod nvme_keyring 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 63296 ']' 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 63296 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 63296 ']' 00:07:19.687 08:41:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 63296 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63296 00:07:19.687 killing process with pid 63296 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63296' 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 63296 00:07:19.687 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 63296 00:07:19.687 [2024-12-11 08:41:27.457406] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:19.946 08:41:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.946 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:20.205 00:07:20.205 real 0m5.209s 00:07:20.205 user 0m18.179s 00:07:20.205 sys 0m1.413s 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.205 ************************************ 00:07:20.205 END TEST nvmf_host_management 00:07:20.205 ************************************ 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.205 ************************************ 00:07:20.205 START TEST nvmf_lvol 00:07:20.205 ************************************ 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:20.205 * Looking for test storage... 
00:07:20.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.205 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.205 --rc genhtml_branch_coverage=1 00:07:20.205 --rc genhtml_function_coverage=1 00:07:20.205 --rc genhtml_legend=1 00:07:20.206 --rc geninfo_all_blocks=1 00:07:20.206 --rc geninfo_unexecuted_blocks=1 00:07:20.206 00:07:20.206 ' 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.206 --rc genhtml_branch_coverage=1 00:07:20.206 --rc genhtml_function_coverage=1 00:07:20.206 --rc genhtml_legend=1 00:07:20.206 --rc geninfo_all_blocks=1 00:07:20.206 --rc geninfo_unexecuted_blocks=1 00:07:20.206 00:07:20.206 ' 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.206 --rc genhtml_branch_coverage=1 00:07:20.206 --rc genhtml_function_coverage=1 00:07:20.206 --rc genhtml_legend=1 00:07:20.206 --rc geninfo_all_blocks=1 00:07:20.206 --rc geninfo_unexecuted_blocks=1 00:07:20.206 00:07:20.206 ' 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.206 --rc genhtml_branch_coverage=1 00:07:20.206 --rc genhtml_function_coverage=1 00:07:20.206 --rc genhtml_legend=1 00:07:20.206 --rc geninfo_all_blocks=1 00:07:20.206 --rc geninfo_unexecuted_blocks=1 00:07:20.206 00:07:20.206 ' 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.206 08:41:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.206 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.465 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.466 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.466 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.466 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:20.466 
08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:20.466 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.466 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:20.466 Cannot find device "nvmf_init_br" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:20.466 Cannot find device "nvmf_init_br2" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:20.466 Cannot find device "nvmf_tgt_br" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.466 Cannot find device "nvmf_tgt_br2" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.466 Cannot find device "nvmf_init_br" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.466 Cannot find device "nvmf_init_br2" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.466 Cannot find device "nvmf_tgt_br" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.466 Cannot find device "nvmf_tgt_br2" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.466 Cannot find device "nvmf_br" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.466 Cannot find device "nvmf_init_if" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.466 Cannot find device "nvmf_init_if2" 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:20.466 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:20.725 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:20.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:20.726 00:07:20.726 --- 10.0.0.3 ping statistics --- 00:07:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.726 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:20.726 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:20.726 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:07:20.726 00:07:20.726 --- 10.0.0.4 ping statistics --- 00:07:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.726 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:20.726 00:07:20.726 --- 10.0.0.1 ping statistics --- 00:07:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.726 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:20.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:07:20.726 00:07:20.726 --- 10.0.0.2 ping statistics --- 00:07:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.726 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63651 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63651 00:07:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63651 ']' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.726 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.726 [2024-12-11 08:41:28.485858] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:20.726 [2024-12-11 08:41:28.486176] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.984 [2024-12-11 08:41:28.636210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.984 [2024-12-11 08:41:28.676243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.984 [2024-12-11 08:41:28.676289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.984 [2024-12-11 08:41:28.676302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.984 [2024-12-11 08:41:28.676312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.984 [2024-12-11 08:41:28.676321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.984 [2024-12-11 08:41:28.677217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.984 [2024-12-11 08:41:28.677723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.984 [2024-12-11 08:41:28.677737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.984 [2024-12-11 08:41:28.710558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:21.501 [2024-12-11 08:41:29.097817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.501 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:21.759 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:21.759 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:22.017 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:22.017 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:22.583 08:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:22.583 08:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=deef80c3-76f2-4f56-a99e-99eb34fa19e7 00:07:22.583 08:41:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u deef80c3-76f2-4f56-a99e-99eb34fa19e7 lvol 20 00:07:23.149 08:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=aac6608a-5047-4c43-9bad-e7488aeff351 00:07:23.149 08:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.149 08:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aac6608a-5047-4c43-9bad-e7488aeff351 00:07:23.432 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:23.727 [2024-12-11 08:41:31.434637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:23.727 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:23.988 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63719 00:07:23.988 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:23.988 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:25.362 08:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot aac6608a-5047-4c43-9bad-e7488aeff351 MY_SNAPSHOT 00:07:25.362 08:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2a004ce7-7822-46d8-b893-26e18e5de5d6 00:07:25.362 08:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize aac6608a-5047-4c43-9bad-e7488aeff351 30 00:07:25.928 08:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2a004ce7-7822-46d8-b893-26e18e5de5d6 MY_CLONE 00:07:26.186 08:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7dbdb1ac-2fd8-4b32-96b0-4fc759dd2f05 00:07:26.186 08:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7dbdb1ac-2fd8-4b32-96b0-4fc759dd2f05 00:07:26.444 08:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63719 00:07:34.553 Initializing NVMe Controllers 00:07:34.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:34.553 Controller IO queue size 128, less than required. 00:07:34.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:34.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:34.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:34.553 Initialization complete. Launching workers. 
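Before the perf results below, the lvol workflow the test just drove over RPC is easier to follow in one place. A condensed sketch of the commands recorded above (rpc_py is the scripts/rpc.py path set at the top of the test; the UUIDs are returned by the calls rather than hard-coded, and sizes are in MiB):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport plus a RAID-0 of two malloc bdevs to back the lvolstore
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512            # -> Malloc0
    $rpc_py bdev_malloc_create 64 512            # -> Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # lvolstore on the raid, then a 20 MiB lvol inside it
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)

    # export the lvol over NVMe/TCP on the target-namespace address
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # run random writes against it while snapshotting, resizing, cloning and inflating
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1
    snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc_py bdev_lvol_resize "$lvol" 30
    clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc_py bdev_lvol_inflate "$clone"
    wait "$perf_pid"

The latency table that follows is the output of that backgrounded spdk_nvme_perf run after the lvol operations completed underneath it.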
00:07:34.553 ======================================================== 00:07:34.553 Latency(us) 00:07:34.553 Device Information : IOPS MiB/s Average min max 00:07:34.553 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10360.90 40.47 12362.97 255.36 71326.63 00:07:34.553 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10267.80 40.11 12468.52 2995.37 71518.07 00:07:34.553 ======================================================== 00:07:34.553 Total : 20628.70 80.58 12415.50 255.36 71518.07 00:07:34.553 00:07:34.553 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:34.553 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aac6608a-5047-4c43-9bad-e7488aeff351 00:07:34.811 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u deef80c3-76f2-4f56-a99e-99eb34fa19e7 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.378 rmmod nvme_tcp 00:07:35.378 rmmod nvme_fabrics 00:07:35.378 rmmod nvme_keyring 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63651 ']' 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63651 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63651 ']' 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63651 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.378 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63651 00:07:35.378 killing process with pid 63651 00:07:35.378 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.378 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.378 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63651' 00:07:35.378 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63651 00:07:35.378 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63651 00:07:35.636 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:35.636 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.637 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:35.896 00:07:35.896 real 0m15.645s 00:07:35.896 user 1m4.880s 00:07:35.896 sys 0m4.093s 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.896 ************************************ 00:07:35.896 END TEST nvmf_lvol 00:07:35.896 ************************************ 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.896 ************************************ 00:07:35.896 START TEST nvmf_lvs_grow 00:07:35.896 ************************************ 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.896 * Looking for test storage... 00:07:35.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.896 --rc genhtml_branch_coverage=1 00:07:35.896 --rc genhtml_function_coverage=1 00:07:35.896 --rc genhtml_legend=1 00:07:35.896 --rc geninfo_all_blocks=1 00:07:35.896 --rc geninfo_unexecuted_blocks=1 00:07:35.896 00:07:35.896 ' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.896 --rc genhtml_branch_coverage=1 00:07:35.896 --rc genhtml_function_coverage=1 00:07:35.896 --rc genhtml_legend=1 00:07:35.896 --rc geninfo_all_blocks=1 00:07:35.896 --rc geninfo_unexecuted_blocks=1 00:07:35.896 00:07:35.896 ' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.896 --rc genhtml_branch_coverage=1 00:07:35.896 --rc genhtml_function_coverage=1 00:07:35.896 --rc genhtml_legend=1 00:07:35.896 --rc geninfo_all_blocks=1 00:07:35.896 --rc geninfo_unexecuted_blocks=1 00:07:35.896 00:07:35.896 ' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.896 --rc genhtml_branch_coverage=1 00:07:35.896 --rc genhtml_function_coverage=1 00:07:35.896 --rc genhtml_legend=1 00:07:35.896 --rc geninfo_all_blocks=1 00:07:35.896 --rc geninfo_unexecuted_blocks=1 00:07:35.896 00:07:35.896 ' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:35.896 08:41:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.896 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.155 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
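The lvs_grow test drives two SPDK processes over JSON-RPC: the nvmf target answers on the default /var/tmp/spdk.sock, while the bdevperf initiator started later gets its own socket at the bdevperf_rpc_sock path defined above. A condensed sketch of that split, using the commands this log records further down (paths and flags as the test uses them):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # target-side RPCs go to the default socket (/var/tmp/spdk.sock)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192

    # the bdevperf initiator listens for RPCs on its own socket, selected with -r
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$bdevperf_rpc_sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # once bdevperf is listening, attach the exported namespace as Nvme0n1 and inspect it
    $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc_py -s "$bdevperf_rpc_sock" bdev_get_bdevs -b Nvme0n1 -t 3000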
00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:36.156 Cannot find device "nvmf_init_br" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:36.156 Cannot find device "nvmf_init_br2" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:36.156 Cannot find device "nvmf_tgt_br" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.156 Cannot find device "nvmf_tgt_br2" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:36.156 Cannot find device "nvmf_init_br" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:36.156 Cannot find device "nvmf_init_br2" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:36.156 Cannot find device "nvmf_tgt_br" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:36.156 Cannot find device "nvmf_tgt_br2" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:36.156 Cannot find device "nvmf_br" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:36.156 Cannot find device "nvmf_init_if" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:36.156 Cannot find device "nvmf_init_if2" 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:36.156 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:36.415 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
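The nvmf_veth_init sequence above builds a small bridged topology: the initiator-side interfaces keep addresses 10.0.0.1 and 10.0.0.2 in the default namespace, the target-side interfaces move into nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and all peer ends are enslaved to the nvmf_br bridge. A condensed sketch of the same commands (names and addresses as the test uses them; ordering is slightly compressed):

    # create the target-side network namespace
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses (default namespace) and target addresses (namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and join the bridge-side ends to nvmf_br
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The lines that follow add iptables ACCEPT rules for TCP port 4420, tagged with an SPDK_NVMF comment so teardown can strip exactly those rules later (iptables-save | grep -v SPDK_NVMF | iptables-restore), and verify connectivity with single pings in both directions.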
00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:36.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:36.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:07:36.415 00:07:36.415 --- 10.0.0.3 ping statistics --- 00:07:36.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.415 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:36.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:36.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:07:36.415 00:07:36.415 --- 10.0.0.4 ping statistics --- 00:07:36.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.415 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:36.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:36.415 00:07:36.415 --- 10.0.0.1 ping statistics --- 00:07:36.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.415 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:36.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:07:36.415 00:07:36.415 --- 10.0.0.2 ping statistics --- 00:07:36.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.415 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.415 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=64101 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 64101 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 64101 ']' 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.416 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.416 [2024-12-11 08:41:44.165155] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:07:36.416 [2024-12-11 08:41:44.165249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.674 [2024-12-11 08:41:44.315209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.674 [2024-12-11 08:41:44.346598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.674 [2024-12-11 08:41:44.346654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.674 [2024-12-11 08:41:44.346666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.674 [2024-12-11 08:41:44.346674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.674 [2024-12-11 08:41:44.346681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.674 [2024-12-11 08:41:44.346985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.674 [2024-12-11 08:41:44.376038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.607 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:37.865 [2024-12-11 08:41:45.451996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:37.865 ************************************ 00:07:37.865 START TEST lvs_grow_clean 00:07:37.865 ************************************ 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:37.865 08:41:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:37.865 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.122 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:38.122 08:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:38.380 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a2fe116-60c2-408b-8856-aa217446de6b 00:07:38.380 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:38.380 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:38.638 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:38.638 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:38.638 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3a2fe116-60c2-408b-8856-aa217446de6b lvol 150 00:07:38.897 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f67c99fb-25c7-4742-af05-5e21c5f67389 00:07:38.897 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:38.897 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:39.155 [2024-12-11 08:41:46.866153] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:39.155 [2024-12-11 08:41:46.866256] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:39.155 true 00:07:39.155 08:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:39.155 08:41:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:39.416 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:39.416 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.674 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f67c99fb-25c7-4742-af05-5e21c5f67389 00:07:40.241 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:40.241 [2024-12-11 08:41:47.990748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:40.241 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64189 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64189 /var/tmp/bdevperf.sock 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 64189 ']' 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.808 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:40.808 [2024-12-11 08:41:48.325200] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
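The cluster counts asserted above follow from the sizes the script uses: the backing file starts at 200 MiB and the lvstore is created with a 4 MiB cluster size, so 50 clusters minus whatever the blobstore keeps for its own metadata (presumably one cluster here) leaves the 49 data clusters the test expects. The grow path exercised later in this test is, in essence, the following sequence; RPCs and paths are exactly as they appear in this log, and only the trailing comments are added explanation:

  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev    # picks up the new file size (51200 -> 102400 blocks)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a2fe116-60c2-408b-8856-aa217446de6b
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b | jq -r '.[0].total_data_clusters'    # expected to report 99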
00:07:40.809 [2024-12-11 08:41:48.325292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64189 ] 00:07:40.809 [2024-12-11 08:41:48.472364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.809 [2024-12-11 08:41:48.511890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.809 [2024-12-11 08:41:48.545364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.067 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.067 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:41.067 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:41.326 Nvme0n1 00:07:41.326 08:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:41.584 [ 00:07:41.584 { 00:07:41.584 "name": "Nvme0n1", 00:07:41.584 "aliases": [ 00:07:41.584 "f67c99fb-25c7-4742-af05-5e21c5f67389" 00:07:41.584 ], 00:07:41.584 "product_name": "NVMe disk", 00:07:41.584 "block_size": 4096, 00:07:41.584 "num_blocks": 38912, 00:07:41.584 "uuid": "f67c99fb-25c7-4742-af05-5e21c5f67389", 00:07:41.584 "numa_id": -1, 00:07:41.584 "assigned_rate_limits": { 00:07:41.584 "rw_ios_per_sec": 0, 00:07:41.584 "rw_mbytes_per_sec": 0, 00:07:41.584 "r_mbytes_per_sec": 0, 00:07:41.584 "w_mbytes_per_sec": 0 00:07:41.584 }, 00:07:41.584 "claimed": false, 00:07:41.584 "zoned": false, 00:07:41.584 "supported_io_types": { 00:07:41.584 "read": true, 00:07:41.584 "write": true, 00:07:41.584 "unmap": true, 00:07:41.584 "flush": true, 00:07:41.584 "reset": true, 00:07:41.584 "nvme_admin": true, 00:07:41.584 "nvme_io": true, 00:07:41.584 "nvme_io_md": false, 00:07:41.584 "write_zeroes": true, 00:07:41.584 "zcopy": false, 00:07:41.584 "get_zone_info": false, 00:07:41.584 "zone_management": false, 00:07:41.584 "zone_append": false, 00:07:41.584 "compare": true, 00:07:41.584 "compare_and_write": true, 00:07:41.584 "abort": true, 00:07:41.584 "seek_hole": false, 00:07:41.584 "seek_data": false, 00:07:41.584 "copy": true, 00:07:41.584 "nvme_iov_md": false 00:07:41.584 }, 00:07:41.584 "memory_domains": [ 00:07:41.584 { 00:07:41.584 "dma_device_id": "system", 00:07:41.584 "dma_device_type": 1 00:07:41.584 } 00:07:41.584 ], 00:07:41.584 "driver_specific": { 00:07:41.584 "nvme": [ 00:07:41.584 { 00:07:41.584 "trid": { 00:07:41.584 "trtype": "TCP", 00:07:41.584 "adrfam": "IPv4", 00:07:41.584 "traddr": "10.0.0.3", 00:07:41.584 "trsvcid": "4420", 00:07:41.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:41.584 }, 00:07:41.584 "ctrlr_data": { 00:07:41.584 "cntlid": 1, 00:07:41.585 "vendor_id": "0x8086", 00:07:41.585 "model_number": "SPDK bdev Controller", 00:07:41.585 "serial_number": "SPDK0", 00:07:41.585 "firmware_revision": "25.01", 00:07:41.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.585 "oacs": { 00:07:41.585 "security": 0, 00:07:41.585 "format": 0, 00:07:41.585 "firmware": 0, 
00:07:41.585 "ns_manage": 0 00:07:41.585 }, 00:07:41.585 "multi_ctrlr": true, 00:07:41.585 "ana_reporting": false 00:07:41.585 }, 00:07:41.585 "vs": { 00:07:41.585 "nvme_version": "1.3" 00:07:41.585 }, 00:07:41.585 "ns_data": { 00:07:41.585 "id": 1, 00:07:41.585 "can_share": true 00:07:41.585 } 00:07:41.585 } 00:07:41.585 ], 00:07:41.585 "mp_policy": "active_passive" 00:07:41.585 } 00:07:41.585 } 00:07:41.585 ] 00:07:41.585 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64205 00:07:41.585 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.585 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:41.585 Running I/O for 10 seconds... 00:07:42.521 Latency(us) 00:07:42.521 [2024-12-11T08:41:50.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.521 Nvme0n1 : 1.00 7146.00 27.91 0.00 0.00 0.00 0.00 0.00 00:07:42.521 [2024-12-11T08:41:50.295Z] =================================================================================================================== 00:07:42.521 [2024-12-11T08:41:50.295Z] Total : 7146.00 27.91 0.00 0.00 0.00 0.00 0.00 00:07:42.521 00:07:43.457 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:43.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.715 Nvme0n1 : 2.00 6811.50 26.61 0.00 0.00 0.00 0.00 0.00 00:07:43.715 [2024-12-11T08:41:51.489Z] =================================================================================================================== 00:07:43.715 [2024-12-11T08:41:51.489Z] Total : 6811.50 26.61 0.00 0.00 0.00 0.00 0.00 00:07:43.715 00:07:43.974 true 00:07:43.974 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:43.974 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:44.232 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:44.232 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:44.232 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64205 00:07:44.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.800 Nvme0n1 : 3.00 6742.33 26.34 0.00 0.00 0.00 0.00 0.00 00:07:44.800 [2024-12-11T08:41:52.574Z] =================================================================================================================== 00:07:44.800 [2024-12-11T08:41:52.574Z] Total : 6742.33 26.34 0.00 0.00 0.00 0.00 0.00 00:07:44.800 00:07:45.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.736 Nvme0n1 : 4.00 6723.75 26.26 0.00 0.00 0.00 0.00 0.00 00:07:45.736 [2024-12-11T08:41:53.510Z] 
=================================================================================================================== 00:07:45.736 [2024-12-11T08:41:53.510Z] Total : 6723.75 26.26 0.00 0.00 0.00 0.00 0.00 00:07:45.736 00:07:46.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.671 Nvme0n1 : 5.00 6712.40 26.22 0.00 0.00 0.00 0.00 0.00 00:07:46.671 [2024-12-11T08:41:54.445Z] =================================================================================================================== 00:07:46.671 [2024-12-11T08:41:54.445Z] Total : 6712.40 26.22 0.00 0.00 0.00 0.00 0.00 00:07:46.671 00:07:47.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.608 Nvme0n1 : 6.00 6594.17 25.76 0.00 0.00 0.00 0.00 0.00 00:07:47.608 [2024-12-11T08:41:55.382Z] =================================================================================================================== 00:07:47.608 [2024-12-11T08:41:55.382Z] Total : 6594.17 25.76 0.00 0.00 0.00 0.00 0.00 00:07:47.608 00:07:48.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.544 Nvme0n1 : 7.00 6650.00 25.98 0.00 0.00 0.00 0.00 0.00 00:07:48.544 [2024-12-11T08:41:56.318Z] =================================================================================================================== 00:07:48.544 [2024-12-11T08:41:56.318Z] Total : 6650.00 25.98 0.00 0.00 0.00 0.00 0.00 00:07:48.544 00:07:49.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.921 Nvme0n1 : 8.00 6676.00 26.08 0.00 0.00 0.00 0.00 0.00 00:07:49.921 [2024-12-11T08:41:57.695Z] =================================================================================================================== 00:07:49.921 [2024-12-11T08:41:57.695Z] Total : 6676.00 26.08 0.00 0.00 0.00 0.00 0.00 00:07:49.921 00:07:50.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.858 Nvme0n1 : 9.00 6710.33 26.21 0.00 0.00 0.00 0.00 0.00 00:07:50.858 [2024-12-11T08:41:58.632Z] =================================================================================================================== 00:07:50.858 [2024-12-11T08:41:58.632Z] Total : 6710.33 26.21 0.00 0.00 0.00 0.00 0.00 00:07:50.858 00:07:51.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.842 Nvme0n1 : 10.00 6712.40 26.22 0.00 0.00 0.00 0.00 0.00 00:07:51.842 [2024-12-11T08:41:59.616Z] =================================================================================================================== 00:07:51.842 [2024-12-11T08:41:59.616Z] Total : 6712.40 26.22 0.00 0.00 0.00 0.00 0.00 00:07:51.842 00:07:51.842 00:07:51.842 Latency(us) 00:07:51.842 [2024-12-11T08:41:59.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.842 Nvme0n1 : 10.01 6721.65 26.26 0.00 0.00 19037.12 9234.62 125829.12 00:07:51.842 [2024-12-11T08:41:59.616Z] =================================================================================================================== 00:07:51.842 [2024-12-11T08:41:59.616Z] Total : 6721.65 26.26 0.00 0.00 19037.12 9234.62 125829.12 00:07:51.842 { 00:07:51.842 "results": [ 00:07:51.842 { 00:07:51.842 "job": "Nvme0n1", 00:07:51.842 "core_mask": "0x2", 00:07:51.842 "workload": "randwrite", 00:07:51.842 "status": "finished", 00:07:51.842 "queue_depth": 128, 00:07:51.842 "io_size": 4096, 00:07:51.842 "runtime": 
10.005277, 00:07:51.842 "iops": 6721.652983720491, 00:07:51.842 "mibps": 26.256456967658167, 00:07:51.842 "io_failed": 0, 00:07:51.842 "io_timeout": 0, 00:07:51.842 "avg_latency_us": 19037.119589927708, 00:07:51.842 "min_latency_us": 9234.618181818181, 00:07:51.842 "max_latency_us": 125829.12 00:07:51.842 } 00:07:51.842 ], 00:07:51.842 "core_count": 1 00:07:51.842 } 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64189 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 64189 ']' 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 64189 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64189 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.842 killing process with pid 64189 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64189' 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 64189 00:07:51.842 Received shutdown signal, test time was about 10.000000 seconds 00:07:51.842 00:07:51.842 Latency(us) 00:07:51.842 [2024-12-11T08:41:59.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.842 [2024-12-11T08:41:59.616Z] =================================================================================================================== 00:07:51.842 [2024-12-11T08:41:59.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 64189 00:07:51.842 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:52.101 08:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.360 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:52.360 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:52.927 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:52.927 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:52.927 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.927 [2024-12-11 08:42:00.696807] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.186 08:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:53.444 request: 00:07:53.444 { 00:07:53.444 "uuid": "3a2fe116-60c2-408b-8856-aa217446de6b", 00:07:53.444 "method": "bdev_lvol_get_lvstores", 00:07:53.444 "req_id": 1 00:07:53.444 } 00:07:53.444 Got JSON-RPC error response 00:07:53.444 response: 00:07:53.444 { 00:07:53.444 "code": -19, 00:07:53.444 "message": "No such device" 00:07:53.444 } 00:07:53.444 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:53.444 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.445 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.445 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.445 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.703 aio_bdev 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f67c99fb-25c7-4742-af05-5e21c5f67389 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f67c99fb-25c7-4742-af05-5e21c5f67389 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.703 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:53.961 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f67c99fb-25c7-4742-af05-5e21c5f67389 -t 2000 00:07:54.219 [ 00:07:54.219 { 00:07:54.219 "name": "f67c99fb-25c7-4742-af05-5e21c5f67389", 00:07:54.219 "aliases": [ 00:07:54.219 "lvs/lvol" 00:07:54.219 ], 00:07:54.219 "product_name": "Logical Volume", 00:07:54.219 "block_size": 4096, 00:07:54.219 "num_blocks": 38912, 00:07:54.219 "uuid": "f67c99fb-25c7-4742-af05-5e21c5f67389", 00:07:54.219 "assigned_rate_limits": { 00:07:54.219 "rw_ios_per_sec": 0, 00:07:54.219 "rw_mbytes_per_sec": 0, 00:07:54.219 "r_mbytes_per_sec": 0, 00:07:54.219 "w_mbytes_per_sec": 0 00:07:54.219 }, 00:07:54.219 "claimed": false, 00:07:54.219 "zoned": false, 00:07:54.219 "supported_io_types": { 00:07:54.219 "read": true, 00:07:54.219 "write": true, 00:07:54.219 "unmap": true, 00:07:54.219 "flush": false, 00:07:54.219 "reset": true, 00:07:54.219 "nvme_admin": false, 00:07:54.219 "nvme_io": false, 00:07:54.219 "nvme_io_md": false, 00:07:54.219 "write_zeroes": true, 00:07:54.219 "zcopy": false, 00:07:54.219 "get_zone_info": false, 00:07:54.219 "zone_management": false, 00:07:54.219 "zone_append": false, 00:07:54.219 "compare": false, 00:07:54.219 "compare_and_write": false, 00:07:54.219 "abort": false, 00:07:54.219 "seek_hole": true, 00:07:54.219 "seek_data": true, 00:07:54.219 "copy": false, 00:07:54.219 "nvme_iov_md": false 00:07:54.219 }, 00:07:54.219 "driver_specific": { 00:07:54.219 "lvol": { 00:07:54.219 "lvol_store_uuid": "3a2fe116-60c2-408b-8856-aa217446de6b", 00:07:54.219 "base_bdev": "aio_bdev", 00:07:54.219 "thin_provision": false, 00:07:54.219 "num_allocated_clusters": 38, 00:07:54.219 "snapshot": false, 00:07:54.219 "clone": false, 00:07:54.219 "esnap_clone": false 00:07:54.219 } 00:07:54.219 } 00:07:54.219 } 00:07:54.219 ] 00:07:54.219 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:54.219 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:54.219 08:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.477 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.477 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:54.477 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:54.736 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:54.736 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f67c99fb-25c7-4742-af05-5e21c5f67389 00:07:54.995 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a2fe116-60c2-408b-8856-aa217446de6b 00:07:55.254 08:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.513 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.081 ************************************ 00:07:56.081 END TEST lvs_grow_clean 00:07:56.081 ************************************ 00:07:56.081 00:07:56.081 real 0m18.068s 00:07:56.081 user 0m17.055s 00:07:56.081 sys 0m2.352s 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.081 ************************************ 00:07:56.081 START TEST lvs_grow_dirty 00:07:56.081 ************************************ 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:56.081 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:56.082 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:56.082 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:56.082 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.082 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.082 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.340 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:56.340 08:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:56.599 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a88c81b6-a19b-482b-b42e-b301603d7847 00:07:56.599 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:07:56.599 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.859 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.859 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.859 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a88c81b6-a19b-482b-b42e-b301603d7847 lvol 150 00:07:57.118 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ead54256-ea73-4095-a377-36574c333492 00:07:57.118 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:57.118 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:57.377 [2024-12-11 08:42:04.928804] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:57.377 [2024-12-11 08:42:04.928893] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:57.377 true 00:07:57.377 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:57.377 08:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:07:57.636 08:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:57.636 08:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.894 08:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ead54256-ea73-4095-a377-36574c333492 00:07:58.153 08:42:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:58.412 [2024-12-11 08:42:05.977396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:58.412 08:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:58.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64451 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64451 /var/tmp/bdevperf.sock 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64451 ']' 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.672 08:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:58.672 [2024-12-11 08:42:06.275959] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
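As a side note on the run that follows: bdevperf is configured entirely over its own RPC socket. Reading the flags from the invocation above, -m 0x2 pins it to core 1, -o 4096 and -q 128 select 4 KiB I/Os at queue depth 128, -w randwrite runs random writes, -t 10 limits the run to 10 seconds, -S 1 produces the per-second status lines seen below, and -z makes it idle until told to start. The sequence the script drives is roughly the following, with the commands taken from this log and the comment added for orientation:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # kicks off the 10-second randwrite run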
00:07:58.672 [2024-12-11 08:42:06.276309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64451 ] 00:07:58.672 [2024-12-11 08:42:06.425638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.931 [2024-12-11 08:42:06.458733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.931 [2024-12-11 08:42:06.490624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.529 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.529 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:59.529 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:00.096 Nvme0n1 00:08:00.096 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.096 [ 00:08:00.096 { 00:08:00.096 "name": "Nvme0n1", 00:08:00.096 "aliases": [ 00:08:00.096 "ead54256-ea73-4095-a377-36574c333492" 00:08:00.096 ], 00:08:00.096 "product_name": "NVMe disk", 00:08:00.096 "block_size": 4096, 00:08:00.096 "num_blocks": 38912, 00:08:00.096 "uuid": "ead54256-ea73-4095-a377-36574c333492", 00:08:00.096 "numa_id": -1, 00:08:00.096 "assigned_rate_limits": { 00:08:00.096 "rw_ios_per_sec": 0, 00:08:00.096 "rw_mbytes_per_sec": 0, 00:08:00.096 "r_mbytes_per_sec": 0, 00:08:00.096 "w_mbytes_per_sec": 0 00:08:00.096 }, 00:08:00.096 "claimed": false, 00:08:00.096 "zoned": false, 00:08:00.096 "supported_io_types": { 00:08:00.096 "read": true, 00:08:00.096 "write": true, 00:08:00.096 "unmap": true, 00:08:00.096 "flush": true, 00:08:00.096 "reset": true, 00:08:00.096 "nvme_admin": true, 00:08:00.096 "nvme_io": true, 00:08:00.096 "nvme_io_md": false, 00:08:00.096 "write_zeroes": true, 00:08:00.096 "zcopy": false, 00:08:00.096 "get_zone_info": false, 00:08:00.096 "zone_management": false, 00:08:00.096 "zone_append": false, 00:08:00.096 "compare": true, 00:08:00.096 "compare_and_write": true, 00:08:00.096 "abort": true, 00:08:00.096 "seek_hole": false, 00:08:00.096 "seek_data": false, 00:08:00.096 "copy": true, 00:08:00.096 "nvme_iov_md": false 00:08:00.096 }, 00:08:00.096 "memory_domains": [ 00:08:00.096 { 00:08:00.096 "dma_device_id": "system", 00:08:00.096 "dma_device_type": 1 00:08:00.096 } 00:08:00.096 ], 00:08:00.096 "driver_specific": { 00:08:00.096 "nvme": [ 00:08:00.096 { 00:08:00.096 "trid": { 00:08:00.096 "trtype": "TCP", 00:08:00.096 "adrfam": "IPv4", 00:08:00.096 "traddr": "10.0.0.3", 00:08:00.096 "trsvcid": "4420", 00:08:00.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.096 }, 00:08:00.096 "ctrlr_data": { 00:08:00.096 "cntlid": 1, 00:08:00.096 "vendor_id": "0x8086", 00:08:00.096 "model_number": "SPDK bdev Controller", 00:08:00.096 "serial_number": "SPDK0", 00:08:00.096 "firmware_revision": "25.01", 00:08:00.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.096 "oacs": { 00:08:00.096 "security": 0, 00:08:00.096 "format": 0, 00:08:00.096 "firmware": 0, 
00:08:00.096 "ns_manage": 0 00:08:00.096 }, 00:08:00.096 "multi_ctrlr": true, 00:08:00.096 "ana_reporting": false 00:08:00.096 }, 00:08:00.096 "vs": { 00:08:00.096 "nvme_version": "1.3" 00:08:00.096 }, 00:08:00.096 "ns_data": { 00:08:00.096 "id": 1, 00:08:00.096 "can_share": true 00:08:00.096 } 00:08:00.096 } 00:08:00.096 ], 00:08:00.096 "mp_policy": "active_passive" 00:08:00.096 } 00:08:00.096 } 00:08:00.096 ] 00:08:00.096 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.096 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64475 00:08:00.096 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.355 Running I/O for 10 seconds... 00:08:01.292 Latency(us) 00:08:01.292 [2024-12-11T08:42:09.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.292 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:01.292 [2024-12-11T08:42:09.066Z] =================================================================================================================== 00:08:01.292 [2024-12-11T08:42:09.066Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:01.292 00:08:02.229 08:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:02.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.229 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:02.229 [2024-12-11T08:42:10.003Z] =================================================================================================================== 00:08:02.229 [2024-12-11T08:42:10.003Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:02.229 00:08:02.487 true 00:08:02.487 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:02.487 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:02.746 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.746 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.746 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 64475 00:08:03.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.313 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:03.313 [2024-12-11T08:42:11.087Z] =================================================================================================================== 00:08:03.313 [2024-12-11T08:42:11.087Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:03.313 00:08:04.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.246 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:04.246 [2024-12-11T08:42:12.020Z] 
=================================================================================================================== 00:08:04.246 [2024-12-11T08:42:12.020Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:04.246 00:08:05.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.624 Nvme0n1 : 5.00 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:08:05.624 [2024-12-11T08:42:13.398Z] =================================================================================================================== 00:08:05.624 [2024-12-11T08:42:13.399Z] Total : 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:08:05.625 00:08:06.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.561 Nvme0n1 : 6.00 6589.50 25.74 0.00 0.00 0.00 0.00 0.00 00:08:06.561 [2024-12-11T08:42:14.335Z] =================================================================================================================== 00:08:06.561 [2024-12-11T08:42:14.335Z] Total : 6589.50 25.74 0.00 0.00 0.00 0.00 0.00 00:08:06.561 00:08:07.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.513 Nvme0n1 : 7.00 6519.00 25.46 0.00 0.00 0.00 0.00 0.00 00:08:07.513 [2024-12-11T08:42:15.287Z] =================================================================================================================== 00:08:07.513 [2024-12-11T08:42:15.287Z] Total : 6519.00 25.46 0.00 0.00 0.00 0.00 0.00 00:08:07.513 00:08:08.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.466 Nvme0n1 : 8.00 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:08.466 [2024-12-11T08:42:16.240Z] =================================================================================================================== 00:08:08.466 [2024-12-11T08:42:16.240Z] Total : 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:08.466 00:08:09.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.403 Nvme0n1 : 9.00 6453.22 25.21 0.00 0.00 0.00 0.00 0.00 00:08:09.403 [2024-12-11T08:42:17.177Z] =================================================================================================================== 00:08:09.403 [2024-12-11T08:42:17.177Z] Total : 6453.22 25.21 0.00 0.00 0.00 0.00 0.00 00:08:09.403 00:08:10.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.342 Nvme0n1 : 10.00 6430.20 25.12 0.00 0.00 0.00 0.00 0.00 00:08:10.342 [2024-12-11T08:42:18.116Z] =================================================================================================================== 00:08:10.342 [2024-12-11T08:42:18.116Z] Total : 6430.20 25.12 0.00 0.00 0.00 0.00 0.00 00:08:10.342 00:08:10.342 00:08:10.342 Latency(us) 00:08:10.342 [2024-12-11T08:42:18.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.342 Nvme0n1 : 10.02 6431.23 25.12 0.00 0.00 19897.58 10604.92 72447.07 00:08:10.342 [2024-12-11T08:42:18.116Z] =================================================================================================================== 00:08:10.342 [2024-12-11T08:42:18.116Z] Total : 6431.23 25.12 0.00 0.00 19897.58 10604.92 72447.07 00:08:10.342 { 00:08:10.342 "results": [ 00:08:10.342 { 00:08:10.342 "job": "Nvme0n1", 00:08:10.342 "core_mask": "0x2", 00:08:10.342 "workload": "randwrite", 00:08:10.342 "status": "finished", 00:08:10.342 "queue_depth": 128, 00:08:10.342 "io_size": 4096, 00:08:10.342 "runtime": 
10.018298, 00:08:10.342 "iops": 6431.23213144588, 00:08:10.342 "mibps": 25.12200051346047, 00:08:10.342 "io_failed": 0, 00:08:10.342 "io_timeout": 0, 00:08:10.342 "avg_latency_us": 19897.58096787211, 00:08:10.342 "min_latency_us": 10604.916363636363, 00:08:10.342 "max_latency_us": 72447.06909090909 00:08:10.342 } 00:08:10.342 ], 00:08:10.342 "core_count": 1 00:08:10.342 } 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64451 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 64451 ']' 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 64451 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64451 00:08:10.342 killing process with pid 64451 00:08:10.342 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.342 00:08:10.342 Latency(us) 00:08:10.342 [2024-12-11T08:42:18.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.342 [2024-12-11T08:42:18.116Z] =================================================================================================================== 00:08:10.342 [2024-12-11T08:42:18.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64451' 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 64451 00:08:10.342 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 64451 00:08:10.602 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:10.861 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.119 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:11.120 08:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:11.378 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:11.378 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:11.378 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64101 00:08:11.378 
08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64101 00:08:11.378 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64101 Killed "${NVMF_APP[@]}" "$@" 00:08:11.378 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64613 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64613 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64613 ']' 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.379 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.379 [2024-12-11 08:42:19.138652] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:11.379 [2024-12-11 08:42:19.138736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.637 [2024-12-11 08:42:19.275907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.637 [2024-12-11 08:42:19.314427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.637 [2024-12-11 08:42:19.314788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.637 [2024-12-11 08:42:19.314834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.637 [2024-12-11 08:42:19.314849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.637 [2024-12-11 08:42:19.314861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
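This is where the dirty variant departs from the clean one: rather than deleting the lvstore, the script has just SIGKILLed the original nvmf target (pid 64101 above), started a fresh one, and will re-create aio_bdev, so the lvstore metadata written during the grow has to be replayed from disk; that is what the blobstore recovery notices below refer to. The cluster counts it then re-checks follow from the same sizes as before the kill (numbers from this log; the ceiling step is the only added reasoning):

  total_data_clusters = 99                            # 400 MiB file, 4 MiB clusters
  allocated by the 150 MiB lvol = ceil(150/4) = 38    # matches num_allocated_clusters reported below
  free_clusters = 99 - 38 = 61                        # the value the free_clusters == 61 checks expect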
00:08:11.638 [2024-12-11 08:42:19.315306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.638 [2024-12-11 08:42:19.346408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.638 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.638 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:11.638 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.638 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.638 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.896 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.896 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.155 [2024-12-11 08:42:19.708234] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:12.155 [2024-12-11 08:42:19.708663] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:12.155 [2024-12-11 08:42:19.708853] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ead54256-ea73-4095-a377-36574c333492 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ead54256-ea73-4095-a377-36574c333492 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.155 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.413 08:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ead54256-ea73-4095-a377-36574c333492 -t 2000 00:08:12.672 [ 00:08:12.672 { 00:08:12.672 "name": "ead54256-ea73-4095-a377-36574c333492", 00:08:12.672 "aliases": [ 00:08:12.672 "lvs/lvol" 00:08:12.672 ], 00:08:12.672 "product_name": "Logical Volume", 00:08:12.672 "block_size": 4096, 00:08:12.672 "num_blocks": 38912, 00:08:12.672 "uuid": "ead54256-ea73-4095-a377-36574c333492", 00:08:12.672 "assigned_rate_limits": { 00:08:12.672 "rw_ios_per_sec": 0, 00:08:12.672 "rw_mbytes_per_sec": 0, 00:08:12.672 "r_mbytes_per_sec": 0, 00:08:12.672 "w_mbytes_per_sec": 0 00:08:12.672 }, 00:08:12.672 
"claimed": false, 00:08:12.672 "zoned": false, 00:08:12.672 "supported_io_types": { 00:08:12.672 "read": true, 00:08:12.672 "write": true, 00:08:12.672 "unmap": true, 00:08:12.672 "flush": false, 00:08:12.672 "reset": true, 00:08:12.672 "nvme_admin": false, 00:08:12.672 "nvme_io": false, 00:08:12.672 "nvme_io_md": false, 00:08:12.672 "write_zeroes": true, 00:08:12.672 "zcopy": false, 00:08:12.672 "get_zone_info": false, 00:08:12.672 "zone_management": false, 00:08:12.672 "zone_append": false, 00:08:12.672 "compare": false, 00:08:12.672 "compare_and_write": false, 00:08:12.672 "abort": false, 00:08:12.672 "seek_hole": true, 00:08:12.672 "seek_data": true, 00:08:12.672 "copy": false, 00:08:12.672 "nvme_iov_md": false 00:08:12.672 }, 00:08:12.672 "driver_specific": { 00:08:12.672 "lvol": { 00:08:12.672 "lvol_store_uuid": "a88c81b6-a19b-482b-b42e-b301603d7847", 00:08:12.672 "base_bdev": "aio_bdev", 00:08:12.672 "thin_provision": false, 00:08:12.672 "num_allocated_clusters": 38, 00:08:12.672 "snapshot": false, 00:08:12.672 "clone": false, 00:08:12.672 "esnap_clone": false 00:08:12.672 } 00:08:12.672 } 00:08:12.672 } 00:08:12.672 ] 00:08:12.672 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:12.672 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:12.672 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:12.931 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:12.931 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:12.931 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:13.190 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:13.190 08:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.449 [2024-12-11 08:42:20.990331] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.449 08:42:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:13.449 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:13.708 request: 00:08:13.708 { 00:08:13.708 "uuid": "a88c81b6-a19b-482b-b42e-b301603d7847", 00:08:13.708 "method": "bdev_lvol_get_lvstores", 00:08:13.708 "req_id": 1 00:08:13.708 } 00:08:13.708 Got JSON-RPC error response 00:08:13.708 response: 00:08:13.708 { 00:08:13.708 "code": -19, 00:08:13.708 "message": "No such device" 00:08:13.708 } 00:08:13.708 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:13.708 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.708 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.708 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.708 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.967 aio_bdev 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ead54256-ea73-4095-a377-36574c333492 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ead54256-ea73-4095-a377-36574c333492 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.967 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:14.226 08:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ead54256-ea73-4095-a377-36574c333492 -t 2000 00:08:14.485 [ 00:08:14.485 { 
00:08:14.485 "name": "ead54256-ea73-4095-a377-36574c333492", 00:08:14.485 "aliases": [ 00:08:14.485 "lvs/lvol" 00:08:14.485 ], 00:08:14.485 "product_name": "Logical Volume", 00:08:14.485 "block_size": 4096, 00:08:14.485 "num_blocks": 38912, 00:08:14.485 "uuid": "ead54256-ea73-4095-a377-36574c333492", 00:08:14.485 "assigned_rate_limits": { 00:08:14.485 "rw_ios_per_sec": 0, 00:08:14.485 "rw_mbytes_per_sec": 0, 00:08:14.485 "r_mbytes_per_sec": 0, 00:08:14.485 "w_mbytes_per_sec": 0 00:08:14.485 }, 00:08:14.485 "claimed": false, 00:08:14.485 "zoned": false, 00:08:14.485 "supported_io_types": { 00:08:14.485 "read": true, 00:08:14.485 "write": true, 00:08:14.485 "unmap": true, 00:08:14.485 "flush": false, 00:08:14.485 "reset": true, 00:08:14.485 "nvme_admin": false, 00:08:14.485 "nvme_io": false, 00:08:14.485 "nvme_io_md": false, 00:08:14.485 "write_zeroes": true, 00:08:14.485 "zcopy": false, 00:08:14.485 "get_zone_info": false, 00:08:14.485 "zone_management": false, 00:08:14.485 "zone_append": false, 00:08:14.485 "compare": false, 00:08:14.485 "compare_and_write": false, 00:08:14.485 "abort": false, 00:08:14.485 "seek_hole": true, 00:08:14.485 "seek_data": true, 00:08:14.485 "copy": false, 00:08:14.485 "nvme_iov_md": false 00:08:14.485 }, 00:08:14.485 "driver_specific": { 00:08:14.485 "lvol": { 00:08:14.485 "lvol_store_uuid": "a88c81b6-a19b-482b-b42e-b301603d7847", 00:08:14.485 "base_bdev": "aio_bdev", 00:08:14.485 "thin_provision": false, 00:08:14.485 "num_allocated_clusters": 38, 00:08:14.485 "snapshot": false, 00:08:14.485 "clone": false, 00:08:14.485 "esnap_clone": false 00:08:14.485 } 00:08:14.485 } 00:08:14.485 } 00:08:14.485 ] 00:08:14.485 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:14.485 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:14.485 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:14.744 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:14.744 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:14.744 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:15.003 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:15.003 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ead54256-ea73-4095-a377-36574c333492 00:08:15.262 08:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a88c81b6-a19b-482b-b42e-b301603d7847 00:08:15.521 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:15.779 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:16.038 ************************************ 00:08:16.038 END TEST lvs_grow_dirty 00:08:16.038 ************************************ 00:08:16.038 00:08:16.038 real 0m20.170s 00:08:16.038 user 0m42.057s 00:08:16.038 sys 0m8.957s 00:08:16.038 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.038 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.296 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:16.296 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:16.296 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:16.296 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:16.297 nvmf_trace.0 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.297 08:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.297 rmmod nvme_tcp 00:08:16.297 rmmod nvme_fabrics 00:08:16.297 rmmod nvme_keyring 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64613 ']' 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64613 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64613 ']' 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64613 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:16.297 08:42:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.297 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64613 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.555 killing process with pid 64613 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64613' 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64613 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64613 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:16.555 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:16.812 00:08:16.812 real 0m40.988s 00:08:16.812 user 1m5.053s 00:08:16.812 sys 0m11.984s 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.812 ************************************ 00:08:16.812 END TEST nvmf_lvs_grow 00:08:16.812 ************************************ 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.812 ************************************ 00:08:16.812 START TEST nvmf_bdev_io_wait 00:08:16.812 ************************************ 00:08:16.812 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:17.071 * Looking for test storage... 
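The lvs_grow_dirty case that just finished exercises lvstore recovery after its backing device disappears: the AIO bdev is created and examined (the bs_recover / "Recover: blob" notices), the lvol is verified, the AIO bdev is deleted so that bdev_lvol_get_lvstores must fail with -19 (No such device), and the bdev is then re-created and everything torn down. A condensed sketch of that flow using the same rpc.py calls seen in the trace; the lvol and lvstore UUIDs are the ones from this particular run and will differ elsewhere:

  # Create the AIO bdev backing the (dirty) lvstore; examine triggers blobstore recovery.
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b ead54256-ea73-4095-a377-36574c333492 -t 2000

  # Deleting the AIO bdev hot-removes the lvstore, so the lookup must now fail (-19).
  scripts/rpc.py bdev_aio_delete aio_bdev
  scripts/rpc.py bdev_lvol_get_lvstores -u a88c81b6-a19b-482b-b42e-b301603d7847   # "No such device"

  # Re-create the AIO bdev, let recovery run again, then clean up.
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_delete ead54256-ea73-4095-a377-36574c333492
  scripts/rpc.py bdev_lvol_delete_lvstore -u a88c81b6-a19b-482b-b42e-b301603d7847
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev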
00:08:17.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.071 --rc genhtml_branch_coverage=1 00:08:17.071 --rc genhtml_function_coverage=1 00:08:17.071 --rc genhtml_legend=1 00:08:17.071 --rc geninfo_all_blocks=1 00:08:17.071 --rc geninfo_unexecuted_blocks=1 00:08:17.071 00:08:17.071 ' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.071 --rc genhtml_branch_coverage=1 00:08:17.071 --rc genhtml_function_coverage=1 00:08:17.071 --rc genhtml_legend=1 00:08:17.071 --rc geninfo_all_blocks=1 00:08:17.071 --rc geninfo_unexecuted_blocks=1 00:08:17.071 00:08:17.071 ' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.071 --rc genhtml_branch_coverage=1 00:08:17.071 --rc genhtml_function_coverage=1 00:08:17.071 --rc genhtml_legend=1 00:08:17.071 --rc geninfo_all_blocks=1 00:08:17.071 --rc geninfo_unexecuted_blocks=1 00:08:17.071 00:08:17.071 ' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.071 --rc genhtml_branch_coverage=1 00:08:17.071 --rc genhtml_function_coverage=1 00:08:17.071 --rc genhtml_legend=1 00:08:17.071 --rc geninfo_all_blocks=1 00:08:17.071 --rc geninfo_unexecuted_blocks=1 00:08:17.071 00:08:17.071 ' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.071 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
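The "lt 1.15 2" trace above is the harness choosing lcov coverage flags from the installed lcov version: both version strings are split on '.', '-' and ':' and compared field by field, and since 1.15 sorts before 2 the run keeps the old-style --rc lcov_branch_coverage=1 flags shown in LCOV_OPTS. A minimal stand-alone re-creation of that comparison (not the harness code itself, and assuming purely numeric version fields):

  # Return success if version $1 sorts strictly before version $2.
  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }

  # As in the trace: lcov 1.15 < 2, so the 1.x-style coverage flags are kept.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi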
00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.072 
08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:17.072 Cannot find device "nvmf_init_br" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:17.072 Cannot find device "nvmf_init_br2" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:17.072 Cannot find device "nvmf_tgt_br" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.072 Cannot find device "nvmf_tgt_br2" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:17.072 Cannot find device "nvmf_init_br" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:17.072 Cannot find device "nvmf_init_br2" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:17.072 Cannot find device "nvmf_tgt_br" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:17.072 Cannot find device "nvmf_tgt_br2" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:17.072 Cannot find device "nvmf_br" 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:17.072 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:17.330 Cannot find device "nvmf_init_if" 00:08:17.330 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:17.330 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:17.330 Cannot find device "nvmf_init_if2" 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:17.331 
08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:17.331 08:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:17.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:17.331 00:08:17.331 --- 10.0.0.3 ping statistics --- 00:08:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.331 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:17.331 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:17.331 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:17.331 00:08:17.331 --- 10.0.0.4 ping statistics --- 00:08:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.331 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:17.331 00:08:17.331 --- 10.0.0.1 ping statistics --- 00:08:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.331 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:17.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:17.331 00:08:17.331 --- 10.0.0.2 ping statistics --- 00:08:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.331 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.331 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64972 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64972 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64972 ']' 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.589 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.589 [2024-12-11 08:42:25.180377] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
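The nvmf_veth_init block above builds the test network from scratch: two initiator-side veth pairs stay in the root namespace (10.0.0.1 and 10.0.0.2), two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), the *_br peer ends are enslaved to a bridge, iptables admits NVMe/TCP traffic on port 4420, and each address is pinged once as a sanity check. Collapsed to a single initiator/target pair, the commands traced above amount to:

  ip netns add nvmf_tgt_ns_spdk

  # veth pair for the initiator side (stays in the root namespace) ...
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if

  # ... and one for the target side, moved into the namespace.
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the two halves and allow NVMe/TCP (4420) plus bridge forwarding.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity check: the namespaced target address must answer from the root namespace.
  ping -c 1 10.0.0.3

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is set up identically in the trace.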
00:08:17.589 [2024-12-11 08:42:25.180473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.589 [2024-12-11 08:42:25.328649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.589 [2024-12-11 08:42:25.358294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.589 [2024-12-11 08:42:25.358529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.589 [2024-12-11 08:42:25.358621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.589 [2024-12-11 08:42:25.358724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.589 [2024-12-11 08:42:25.358784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.589 [2024-12-11 08:42:25.359740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.589 [2024-12-11 08:42:25.359900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.589 [2024-12-11 08:42:25.360061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.589 [2024-12-11 08:42:25.360113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 [2024-12-11 08:42:25.502095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 [2024-12-11 08:42:25.517074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 Malloc0 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.847 [2024-12-11 08:42:25.564062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64994 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64996 00:08:17.847 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.847 08:42:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.847 { 00:08:17.847 "params": { 00:08:17.848 "name": "Nvme$subsystem", 00:08:17.848 "trtype": "$TEST_TRANSPORT", 00:08:17.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "$NVMF_PORT", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.848 "hdgst": ${hdgst:-false}, 00:08:17.848 "ddgst": ${ddgst:-false} 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 } 00:08:17.848 EOF 00:08:17.848 )") 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64998 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.848 { 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme$subsystem", 00:08:17.848 "trtype": "$TEST_TRANSPORT", 00:08:17.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "$NVMF_PORT", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.848 "hdgst": ${hdgst:-false}, 00:08:17.848 "ddgst": ${ddgst:-false} 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 } 00:08:17.848 EOF 00:08:17.848 )") 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65001 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.848 { 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme$subsystem", 00:08:17.848 "trtype": "$TEST_TRANSPORT", 00:08:17.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": 
"$NVMF_PORT", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.848 "hdgst": ${hdgst:-false}, 00:08:17.848 "ddgst": ${ddgst:-false} 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 } 00:08:17.848 EOF 00:08:17.848 )") 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.848 { 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme$subsystem", 00:08:17.848 "trtype": "$TEST_TRANSPORT", 00:08:17.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "$NVMF_PORT", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.848 "hdgst": ${hdgst:-false}, 00:08:17.848 "ddgst": ${ddgst:-false} 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 } 00:08:17.848 EOF 00:08:17.848 )") 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme1", 00:08:17.848 "trtype": "tcp", 00:08:17.848 "traddr": "10.0.0.3", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "4420", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.848 "hdgst": false, 00:08:17.848 "ddgst": false 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 }' 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme1", 00:08:17.848 "trtype": "tcp", 00:08:17.848 "traddr": "10.0.0.3", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "4420", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.848 "hdgst": false, 00:08:17.848 "ddgst": false 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 }' 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme1", 00:08:17.848 "trtype": "tcp", 00:08:17.848 "traddr": "10.0.0.3", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "4420", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.848 "hdgst": false, 00:08:17.848 "ddgst": false 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 }' 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:17.848 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.848 "params": { 00:08:17.848 "name": "Nvme1", 00:08:17.848 "trtype": "tcp", 00:08:17.848 "traddr": "10.0.0.3", 00:08:17.848 "adrfam": "ipv4", 00:08:17.848 "trsvcid": "4420", 00:08:17.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.848 "hdgst": false, 00:08:17.848 "ddgst": false 00:08:17.848 }, 00:08:17.848 "method": "bdev_nvme_attach_controller" 00:08:17.848 }' 00:08:18.106 [2024-12-11 08:42:25.629716] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:18.106 [2024-12-11 08:42:25.629831] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:18.106 [2024-12-11 08:42:25.632214] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:18.106 [2024-12-11 08:42:25.632289] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:18.106 08:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64994 00:08:18.106 [2024-12-11 08:42:25.657115] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:18.106 [2024-12-11 08:42:25.657225] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:18.106 [2024-12-11 08:42:25.659892] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:08:18.106 [2024-12-11 08:42:25.659970] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:18.106 [2024-12-11 08:42:25.826192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.106 [2024-12-11 08:42:25.857740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:18.106 [2024-12-11 08:42:25.865013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.106 [2024-12-11 08:42:25.871718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.365 [2024-12-11 08:42:25.897186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:18.365 [2024-12-11 08:42:25.910146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.365 [2024-12-11 08:42:25.910956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.365 [2024-12-11 08:42:25.941864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.365 [2024-12-11 08:42:25.955696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.365 Running I/O for 1 seconds... 00:08:18.365 [2024-12-11 08:42:25.994404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.365 Running I/O for 1 seconds... 00:08:18.365 [2024-12-11 08:42:26.033177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.365 [2024-12-11 08:42:26.050550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.365 Running I/O for 1 seconds... 00:08:18.623 Running I/O for 1 seconds... 
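At this point in the trace, four bdevperf instances (instance ids 1-4 on core masks 0x10, 0x20, 0x40 and 0x80) run write, read, flush and unmap workloads for one second each against the same cnode1 subsystem; the repeated heredocs above are gen_nvmf_target_json expanding one bdev_nvme_attach_controller entry per instance, handed to bdevperf over /dev/fd/63. A minimal sketch of what one of those runs amounts to, with the resolved config entry copied from the printf output above (the standalone jq call and the process substitution are illustrative assumptions, not the exact bdev_io_wait.sh wording):

  # Resolved attach-controller entry for Nvme1, as printed by nvmf/common.sh@586
  # above; piping it through "jq ." mirrors the common.sh@584 validation step.
  printf '%s\n' '{
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }' | jq .

  # One of the four concurrent runs (the read instance). Flags are copied from the
  # trace; feeding gen_nvmf_target_json in via process substitution is assumed to
  # be how the --json /dev/fd/63 argument arises.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 \
      --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256

The per-workload latency tables that follow are the output of those four one-second runs.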
00:08:19.556 163488.00 IOPS, 638.62 MiB/s 00:08:19.556 Latency(us) 00:08:19.556 [2024-12-11T08:42:27.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.556 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:19.556 Nvme1n1 : 1.00 163132.70 637.24 0.00 0.00 780.38 383.53 2144.81 00:08:19.556 [2024-12-11T08:42:27.330Z] =================================================================================================================== 00:08:19.556 [2024-12-11T08:42:27.330Z] Total : 163132.70 637.24 0.00 0.00 780.38 383.53 2144.81 00:08:19.556 11709.00 IOPS, 45.74 MiB/s 00:08:19.556 Latency(us) 00:08:19.556 [2024-12-11T08:42:27.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.556 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:19.556 Nvme1n1 : 1.01 11762.46 45.95 0.00 0.00 10842.74 6881.28 22878.02 00:08:19.556 [2024-12-11T08:42:27.330Z] =================================================================================================================== 00:08:19.556 [2024-12-11T08:42:27.330Z] Total : 11762.46 45.95 0.00 0.00 10842.74 6881.28 22878.02 00:08:19.556 7839.00 IOPS, 30.62 MiB/s 00:08:19.556 Latency(us) 00:08:19.556 [2024-12-11T08:42:27.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.556 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:19.556 Nvme1n1 : 1.01 7896.89 30.85 0.00 0.00 16121.84 6434.44 26095.24 00:08:19.556 [2024-12-11T08:42:27.330Z] =================================================================================================================== 00:08:19.556 [2024-12-11T08:42:27.330Z] Total : 7896.89 30.85 0.00 0.00 16121.84 6434.44 26095.24 00:08:19.556 7606.00 IOPS, 29.71 MiB/s 00:08:19.556 Latency(us) 00:08:19.556 [2024-12-11T08:42:27.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.556 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:19.556 Nvme1n1 : 1.01 7656.57 29.91 0.00 0.00 16624.50 7983.48 25261.15 00:08:19.556 [2024-12-11T08:42:27.330Z] =================================================================================================================== 00:08:19.556 [2024-12-11T08:42:27.330Z] Total : 7656.57 29.91 0.00 0.00 16624.50 7983.48 25261.15 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64996 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64998 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65001 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:19.556 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.816 rmmod nvme_tcp 00:08:19.816 rmmod nvme_fabrics 00:08:19.816 rmmod nvme_keyring 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64972 ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64972 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64972 ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64972 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64972 00:08:19.816 killing process with pid 64972 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64972' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64972 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64972 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:19.816 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:20.075 00:08:20.075 real 0m3.314s 00:08:20.075 user 0m13.140s 00:08:20.075 sys 0m2.044s 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.075 ************************************ 00:08:20.075 END TEST nvmf_bdev_io_wait 00:08:20.075 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.075 ************************************ 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.336 ************************************ 00:08:20.336 START TEST nvmf_queue_depth 00:08:20.336 ************************************ 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.336 * Looking for test storage... 
00:08:20.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.336 08:42:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.336 --rc genhtml_branch_coverage=1 00:08:20.336 --rc genhtml_function_coverage=1 00:08:20.336 --rc genhtml_legend=1 00:08:20.336 --rc geninfo_all_blocks=1 00:08:20.336 --rc geninfo_unexecuted_blocks=1 00:08:20.336 00:08:20.336 ' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.336 --rc genhtml_branch_coverage=1 00:08:20.336 --rc genhtml_function_coverage=1 00:08:20.336 --rc genhtml_legend=1 00:08:20.336 --rc geninfo_all_blocks=1 00:08:20.336 --rc geninfo_unexecuted_blocks=1 00:08:20.336 00:08:20.336 ' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.336 --rc genhtml_branch_coverage=1 00:08:20.336 --rc genhtml_function_coverage=1 00:08:20.336 --rc genhtml_legend=1 00:08:20.336 --rc geninfo_all_blocks=1 00:08:20.336 --rc geninfo_unexecuted_blocks=1 00:08:20.336 00:08:20.336 ' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.336 --rc genhtml_branch_coverage=1 00:08:20.336 --rc genhtml_function_coverage=1 00:08:20.336 --rc genhtml_legend=1 00:08:20.336 --rc geninfo_all_blocks=1 00:08:20.336 --rc geninfo_unexecuted_blocks=1 00:08:20.336 00:08:20.336 ' 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:20.336 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:20.337 
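With the common helpers sourced, queue_depth.sh sets up a 64 MB malloc bdev with 512-byte blocks and brings up an NVMe/TCP target listening on 10.0.0.3:4420; the rpc_cmd calls doing this are traced further below (queue_depth.sh@23-27). A rough equivalent issued by hand with scripts/rpc.py, given as an illustration only (the test drives the same RPCs through its rpc_cmd helper against the in-namespace target):

  # Same sequence as the rpc_cmd calls traced below: transport, backing bdev,
  # subsystem, namespace, and TCP listener.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420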
08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:20.337 08:42:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:20.337 Cannot find device "nvmf_init_br" 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:20.337 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:20.596 Cannot find device "nvmf_init_br2" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:20.596 Cannot find device "nvmf_tgt_br" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.596 Cannot find device "nvmf_tgt_br2" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:20.596 Cannot find device "nvmf_init_br" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:20.596 Cannot find device "nvmf_init_br2" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:20.596 Cannot find device "nvmf_tgt_br" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:20.596 Cannot find device "nvmf_tgt_br2" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:20.596 Cannot find device "nvmf_br" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:20.596 Cannot find device "nvmf_init_if" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:20.596 Cannot find device "nvmf_init_if2" 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.596 08:42:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:20.596 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:20.596 
08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:20.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:20.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:20.855 00:08:20.855 --- 10.0.0.3 ping statistics --- 00:08:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.855 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:20.855 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:20.855 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:20.855 00:08:20.855 --- 10.0.0.4 ping statistics --- 00:08:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.855 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:20.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:20.855 00:08:20.855 --- 10.0.0.1 ping statistics --- 00:08:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.855 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:20.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:08:20.855 00:08:20.855 --- 10.0.0.2 ping statistics --- 00:08:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.855 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.855 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=65263 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 65263 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65263 ']' 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.856 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.856 [2024-12-11 08:42:28.503748] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:08:20.856 [2024-12-11 08:42:28.504403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.115 [2024-12-11 08:42:28.657390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.115 [2024-12-11 08:42:28.687078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.115 [2024-12-11 08:42:28.687142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.115 [2024-12-11 08:42:28.687189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.115 [2024-12-11 08:42:28.687215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.115 [2024-12-11 08:42:28.687223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.115 [2024-12-11 08:42:28.687531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.115 [2024-12-11 08:42:28.714111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.115 [2024-12-11 08:42:28.820058] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.115 Malloc0 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.115 [2024-12-11 08:42:28.865666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65282 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65282 /var/tmp/bdevperf.sock 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 65282 ']' 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.115 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.374 [2024-12-11 08:42:28.920216] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:08:21.374 [2024-12-11 08:42:28.920339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65282 ] 00:08:21.374 [2024-12-11 08:42:29.070669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.374 [2024-12-11 08:42:29.110042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.374 [2024-12-11 08:42:29.143818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.311 08:42:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.311 08:42:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:22.311 08:42:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:22.311 08:42:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.311 08:42:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.311 NVMe0n1 00:08:22.311 08:42:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.311 08:42:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.570 Running I/O for 10 seconds... 00:08:24.444 7202.00 IOPS, 28.13 MiB/s [2024-12-11T08:42:33.154Z] 8013.00 IOPS, 31.30 MiB/s [2024-12-11T08:42:34.530Z] 8475.33 IOPS, 33.11 MiB/s [2024-12-11T08:42:35.466Z] 8720.25 IOPS, 34.06 MiB/s [2024-12-11T08:42:36.429Z] 8709.20 IOPS, 34.02 MiB/s [2024-12-11T08:42:37.366Z] 8720.67 IOPS, 34.07 MiB/s [2024-12-11T08:42:38.302Z] 8804.57 IOPS, 34.39 MiB/s [2024-12-11T08:42:39.238Z] 8899.88 IOPS, 34.77 MiB/s [2024-12-11T08:42:40.174Z] 8985.78 IOPS, 35.10 MiB/s [2024-12-11T08:42:40.433Z] 9012.60 IOPS, 35.21 MiB/s 00:08:32.659 Latency(us) 00:08:32.659 [2024-12-11T08:42:40.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.659 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:32.659 Verification LBA range: start 0x0 length 0x4000 00:08:32.659 NVMe0n1 : 10.10 9012.63 35.21 0.00 0.00 113033.87 26214.40 88652.33 00:08:32.659 [2024-12-11T08:42:40.433Z] =================================================================================================================== 00:08:32.659 [2024-12-11T08:42:40.433Z] Total : 9012.63 35.21 0.00 0.00 113033.87 26214.40 88652.33 00:08:32.659 { 00:08:32.659 "results": [ 00:08:32.659 { 00:08:32.659 "job": "NVMe0n1", 00:08:32.659 "core_mask": "0x1", 00:08:32.659 "workload": "verify", 00:08:32.659 "status": "finished", 00:08:32.659 "verify_range": { 00:08:32.659 "start": 0, 00:08:32.659 "length": 16384 00:08:32.659 }, 00:08:32.659 "queue_depth": 1024, 00:08:32.659 "io_size": 4096, 00:08:32.659 "runtime": 10.102159, 00:08:32.659 "iops": 9012.62789469063, 00:08:32.659 "mibps": 35.20557771363527, 00:08:32.659 "io_failed": 0, 00:08:32.659 "io_timeout": 0, 00:08:32.659 "avg_latency_us": 113033.87260853285, 00:08:32.659 "min_latency_us": 26214.4, 00:08:32.659 "max_latency_us": 88652.33454545455 00:08:32.659 } 
00:08:32.659 ], 00:08:32.659 "core_count": 1 00:08:32.659 } 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65282 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65282 ']' 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65282 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65282 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.659 killing process with pid 65282 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65282' 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65282 00:08:32.659 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.659 00:08:32.659 Latency(us) 00:08:32.659 [2024-12-11T08:42:40.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.659 [2024-12-11T08:42:40.433Z] =================================================================================================================== 00:08:32.659 [2024-12-11T08:42:40.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.659 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65282 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.918 rmmod nvme_tcp 00:08:32.918 rmmod nvme_fabrics 00:08:32.918 rmmod nvme_keyring 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:32.918 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 65263 ']' 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 65263 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 65263 ']' 00:08:32.919 
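For reference, the 10-second queue-depth run summarized in the JSON results above was driven by starting bdevperf idle, attaching the remote namespace over its private RPC socket, and then triggering the workload. A hedged sketch of that sequence using scripts/rpc.py directly (the trace itself goes through the rpc_cmd helper, with the same arguments):

  # Start bdevperf waiting for RPCs (-z): queue depth 1024, 4 KiB verify I/O, 10 s.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # Attach the NVMe-oF controller exposed at 10.0.0.3:4420 (cnode1) as NVMe0.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Kick off the configured workload; this produces the IOPS/latency table above.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests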
08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 65263 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65263 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:32.919 killing process with pid 65263 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65263' 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 65263 00:08:32.919 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 65263 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:33.178 08:42:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:33.178 00:08:33.178 real 0m13.056s 00:08:33.178 user 0m22.842s 00:08:33.178 sys 0m2.093s 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.178 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.178 ************************************ 00:08:33.178 END TEST nvmf_queue_depth 00:08:33.178 ************************************ 00:08:33.437 08:42:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:33.437 08:42:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.437 08:42:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.437 08:42:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 ************************************ 00:08:33.437 START TEST nvmf_target_multipath 00:08:33.437 ************************************ 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:33.437 * Looking for test storage... 
00:08:33.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.437 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.438 --rc genhtml_branch_coverage=1 00:08:33.438 --rc genhtml_function_coverage=1 00:08:33.438 --rc genhtml_legend=1 00:08:33.438 --rc geninfo_all_blocks=1 00:08:33.438 --rc geninfo_unexecuted_blocks=1 00:08:33.438 00:08:33.438 ' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.438 --rc genhtml_branch_coverage=1 00:08:33.438 --rc genhtml_function_coverage=1 00:08:33.438 --rc genhtml_legend=1 00:08:33.438 --rc geninfo_all_blocks=1 00:08:33.438 --rc geninfo_unexecuted_blocks=1 00:08:33.438 00:08:33.438 ' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.438 --rc genhtml_branch_coverage=1 00:08:33.438 --rc genhtml_function_coverage=1 00:08:33.438 --rc genhtml_legend=1 00:08:33.438 --rc geninfo_all_blocks=1 00:08:33.438 --rc geninfo_unexecuted_blocks=1 00:08:33.438 00:08:33.438 ' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.438 --rc genhtml_branch_coverage=1 00:08:33.438 --rc genhtml_function_coverage=1 00:08:33.438 --rc genhtml_legend=1 00:08:33.438 --rc geninfo_all_blocks=1 00:08:33.438 --rc geninfo_unexecuted_blocks=1 00:08:33.438 00:08:33.438 ' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.438 
08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.438 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.698 08:42:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.698 Cannot find device "nvmf_init_br" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.698 Cannot find device "nvmf_init_br2" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.698 Cannot find device "nvmf_tgt_br" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.698 Cannot find device "nvmf_tgt_br2" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.698 Cannot find device "nvmf_init_br" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.698 Cannot find device "nvmf_init_br2" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.698 Cannot find device "nvmf_tgt_br" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.698 Cannot find device "nvmf_tgt_br2" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.698 Cannot find device "nvmf_br" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.698 Cannot find device "nvmf_init_if" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.698 Cannot find device "nvmf_init_if2" 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.698 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:33.699 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:33.958 00:08:33.958 --- 10.0.0.3 ping statistics --- 00:08:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.958 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:08:33.958 00:08:33.958 --- 10.0.0.4 ping statistics --- 00:08:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.958 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:33.958 00:08:33.958 --- 10.0.0.1 ping statistics --- 00:08:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.958 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:08:33.958 00:08:33.958 --- 10.0.0.2 ping statistics --- 00:08:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.958 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65664 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65664 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65664 ']' 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:33.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.958 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 [2024-12-11 08:42:41.664908] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:33.958 [2024-12-11 08:42:41.665000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.217 [2024-12-11 08:42:41.822059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.217 [2024-12-11 08:42:41.866771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.217 [2024-12-11 08:42:41.866839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.217 [2024-12-11 08:42:41.866854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.217 [2024-12-11 08:42:41.866864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.217 [2024-12-11 08:42:41.866872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.217 [2024-12-11 08:42:41.867795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.217 [2024-12-11 08:42:41.867939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.217 [2024-12-11 08:42:41.868063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.217 [2024-12-11 08:42:41.868182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.217 [2024-12-11 08:42:41.906699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.217 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.217 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:34.217 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.217 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.217 08:42:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.475 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.475 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.733 [2024-12-11 08:42:42.298608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.733 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:34.992 Malloc0 00:08:34.992 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:35.250 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.508 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:35.766 [2024-12-11 08:42:43.387679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:35.766 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:36.025 [2024-12-11 08:42:43.635960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:36.025 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:36.025 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:36.284 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:36.284 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:36.284 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:36.284 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:36.284 08:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:38.184 08:42:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:38.184 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65746 00:08:38.185 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:38.185 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:38.443 [global] 00:08:38.443 thread=1 00:08:38.443 invalidate=1 00:08:38.443 rw=randrw 00:08:38.443 time_based=1 00:08:38.443 runtime=6 00:08:38.443 ioengine=libaio 00:08:38.443 direct=1 00:08:38.443 bs=4096 00:08:38.443 iodepth=128 00:08:38.443 norandommap=0 00:08:38.443 numjobs=1 00:08:38.443 00:08:38.443 verify_dump=1 00:08:38.443 verify_backlog=512 00:08:38.443 verify_state_save=0 00:08:38.443 do_verify=1 00:08:38.443 verify=crc32c-intel 00:08:38.443 [job0] 00:08:38.443 filename=/dev/nvme0n1 00:08:38.443 Could not set queue depth (nvme0n1) 00:08:38.443 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:38.443 fio-3.35 00:08:38.443 Starting 1 thread 00:08:39.378 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:39.636 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:39.894 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:39.895 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:40.461 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:40.720 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65746 00:08:44.909 00:08:44.909 job0: (groupid=0, jobs=1): err= 0: pid=65767: Wed Dec 11 08:42:52 2024 00:08:44.909 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(240MiB/6006msec) 00:08:44.909 slat (usec): min=3, max=6456, avg=57.96, stdev=222.96 00:08:44.909 clat (usec): min=1324, max=14985, avg=8541.94, stdev=1428.65 00:08:44.909 lat (usec): min=1703, max=14997, avg=8599.90, stdev=1431.85 00:08:44.909 clat percentiles (usec): 00:08:44.909 | 1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 7832], 00:08:44.909 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:08:44.909 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[11731], 00:08:44.909 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[14222], 00:08:44.909 | 99.99th=[14615] 00:08:44.909 bw ( KiB/s): min= 5136, max=27328, per=50.73%, avg=20740.36, stdev=6837.38, samples=11 00:08:44.909 iops : min= 1284, max= 6832, avg=5185.09, stdev=1709.35, samples=11 00:08:44.909 write: IOPS=6028, BW=23.5MiB/s (24.7MB/s)(124MiB/5264msec); 0 zone resets 00:08:44.909 slat (usec): min=15, max=1572, avg=66.51, stdev=163.18 00:08:44.909 clat (usec): min=1261, max=14915, avg=7486.90, stdev=1280.07 00:08:44.909 lat (usec): min=1312, max=14946, avg=7553.41, stdev=1284.16 00:08:44.909 clat percentiles (usec): 00:08:44.909 | 1.00th=[ 3392], 5.00th=[ 4490], 10.00th=[ 6128], 20.00th=[ 6980], 00:08:44.909 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:08:44.909 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:08:44.909 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13042], 99.95th=[13566], 00:08:44.909 | 99.99th=[13960] 00:08:44.909 bw ( KiB/s): min= 5280, max=26744, per=86.31%, avg=20813.82, stdev=6659.78, samples=11 00:08:44.909 iops : min= 1320, max= 6686, avg=5203.45, stdev=1664.94, samples=11 00:08:44.909 lat (msec) : 2=0.01%, 4=1.44%, 10=92.80%, 20=5.75% 00:08:44.909 cpu : usr=5.68%, sys=20.92%, ctx=5420, majf=0, minf=139 00:08:44.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.909 issued rwts: total=61381,31733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.909 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.909 00:08:44.909 Run status group 0 (all jobs): 00:08:44.909 READ: bw=39.9MiB/s (41.9MB/s), 39.9MiB/s-39.9MiB/s (41.9MB/s-41.9MB/s), io=240MiB (251MB), run=6006-6006msec 00:08:44.909 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=124MiB (130MB), run=5264-5264msec 00:08:44.909 00:08:44.909 Disk stats (read/write): 00:08:44.909 nvme0n1: ios=60793/30905, merge=0/0, ticks=499077/217009, in_queue=716086, util=98.73% 00:08:44.909 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:44.909 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65853 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:45.167 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:45.167 [global] 00:08:45.167 thread=1 00:08:45.167 invalidate=1 00:08:45.167 rw=randrw 00:08:45.167 time_based=1 00:08:45.167 runtime=6 00:08:45.167 ioengine=libaio 00:08:45.167 direct=1 00:08:45.167 bs=4096 00:08:45.167 iodepth=128 00:08:45.167 norandommap=0 00:08:45.167 numjobs=1 00:08:45.167 00:08:45.167 verify_dump=1 00:08:45.167 verify_backlog=512 00:08:45.167 verify_state_save=0 00:08:45.167 do_verify=1 00:08:45.167 verify=crc32c-intel 00:08:45.167 [job0] 00:08:45.167 filename=/dev/nvme0n1 00:08:45.426 Could not set queue depth (nvme0n1) 00:08:45.426 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.426 fio-3.35 00:08:45.426 Starting 1 thread 00:08:46.362 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:46.621 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:46.879 
08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.879 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:46.880 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:46.880 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:46.880 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:47.138 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:47.397 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65853 00:08:51.584 00:08:51.584 job0: (groupid=0, jobs=1): err= 0: pid=65874: Wed Dec 11 08:42:59 2024 00:08:51.584 read: IOPS=11.5k, BW=45.1MiB/s (47.2MB/s)(270MiB/6003msec) 00:08:51.584 slat (usec): min=5, max=5606, avg=42.85, stdev=187.35 00:08:51.584 clat (usec): min=1533, max=15250, avg=7591.29, stdev=1961.55 00:08:51.584 lat (usec): min=1544, max=15276, avg=7634.14, stdev=1977.65 00:08:51.584 clat percentiles (usec): 00:08:51.585 | 1.00th=[ 3032], 5.00th=[ 3982], 10.00th=[ 4686], 20.00th=[ 5866], 00:08:51.585 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:08:51.585 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11076], 00:08:51.585 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13829], 99.95th=[14091], 00:08:51.585 | 99.99th=[15008] 00:08:51.585 bw ( KiB/s): min= 9240, max=41848, per=53.57%, avg=24716.18, stdev=9468.14, samples=11 00:08:51.585 iops : min= 2310, max=10462, avg=6179.00, stdev=2367.04, samples=11 00:08:51.585 write: IOPS=6880, BW=26.9MiB/s (28.2MB/s)(145MiB/5393msec); 0 zone resets 00:08:51.585 slat (usec): min=11, max=3755, avg=52.04, stdev=136.01 00:08:51.585 clat (usec): min=1225, max=14530, avg=6374.66, stdev=1852.14 00:08:51.585 lat (usec): min=1261, max=14554, avg=6426.71, stdev=1869.10 00:08:51.585 clat percentiles (usec): 00:08:51.585 | 1.00th=[ 2573], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4359], 00:08:51.585 | 30.00th=[ 5014], 40.00th=[ 6390], 50.00th=[ 7046], 60.00th=[ 7308], 00:08:51.585 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:08:51.585 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12387], 99.95th=[12780], 00:08:51.585 | 99.99th=[14091] 00:08:51.585 bw ( KiB/s): min= 9632, max=41176, per=89.74%, avg=24698.09, stdev=9356.38, samples=11 00:08:51.585 iops : min= 2408, max=10294, avg=6174.45, stdev=2339.10, samples=11 00:08:51.585 lat (msec) : 2=0.10%, 4=8.30%, 10=86.68%, 20=4.92% 00:08:51.585 cpu : usr=6.08%, sys=21.98%, ctx=5920, majf=0, minf=127 00:08:51.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:51.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.585 issued rwts: total=69246,37106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.585 
00:08:51.585 Run status group 0 (all jobs): 00:08:51.585 READ: bw=45.1MiB/s (47.2MB/s), 45.1MiB/s-45.1MiB/s (47.2MB/s-47.2MB/s), io=270MiB (284MB), run=6003-6003msec 00:08:51.585 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=145MiB (152MB), run=5393-5393msec 00:08:51.585 00:08:51.585 Disk stats (read/write): 00:08:51.585 nvme0n1: ios=68622/36299, merge=0/0, ticks=499858/216027, in_queue=715885, util=98.70% 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:51.585 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.152 rmmod nvme_tcp 00:08:52.152 rmmod nvme_fabrics 00:08:52.152 rmmod nvme_keyring 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 65664 ']' 00:08:52.152 08:42:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65664 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65664 ']' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65664 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65664 00:08:52.152 killing process with pid 65664 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65664' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65664 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65664 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:52.152 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:52.411 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.411 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:52.411 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:52.411 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:52.411 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:52.411 08:42:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:52.411 00:08:52.411 real 0m19.147s 00:08:52.411 user 1m10.806s 00:08:52.411 sys 0m9.882s 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.411 ************************************ 00:08:52.411 END TEST nvmf_target_multipath 00:08:52.411 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:52.411 ************************************ 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.671 ************************************ 00:08:52.671 START TEST nvmf_zcopy 00:08:52.671 ************************************ 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:52.671 * Looking for test storage... 
00:08:52.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.671 --rc genhtml_branch_coverage=1 00:08:52.671 --rc genhtml_function_coverage=1 00:08:52.671 --rc genhtml_legend=1 00:08:52.671 --rc geninfo_all_blocks=1 00:08:52.671 --rc geninfo_unexecuted_blocks=1 00:08:52.671 00:08:52.671 ' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.671 --rc genhtml_branch_coverage=1 00:08:52.671 --rc genhtml_function_coverage=1 00:08:52.671 --rc genhtml_legend=1 00:08:52.671 --rc geninfo_all_blocks=1 00:08:52.671 --rc geninfo_unexecuted_blocks=1 00:08:52.671 00:08:52.671 ' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.671 --rc genhtml_branch_coverage=1 00:08:52.671 --rc genhtml_function_coverage=1 00:08:52.671 --rc genhtml_legend=1 00:08:52.671 --rc geninfo_all_blocks=1 00:08:52.671 --rc geninfo_unexecuted_blocks=1 00:08:52.671 00:08:52.671 ' 00:08:52.671 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.671 --rc genhtml_branch_coverage=1 00:08:52.671 --rc genhtml_function_coverage=1 00:08:52.671 --rc genhtml_legend=1 00:08:52.671 --rc geninfo_all_blocks=1 00:08:52.671 --rc geninfo_unexecuted_blocks=1 00:08:52.671 00:08:52.671 ' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
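The xtrace block above is scripts/common.sh deciding whether the installed lcov (reported by "lcov --version", here 1.15) predates 2.0 before picking the pre-2.0 "--rc lcov_branch_coverage=1" option spelling for LCOV_OPTS. The following is a condensed bash sketch of that dotted-version comparison, reconstructed from the trace (split on '.', '-' and ':', then compare field by field); it is illustrative only and simplified, not the verbatim scripts/common.sh helper, which also validates each field through its "decimal" check.

#!/usr/bin/env bash
# Simplified reconstruction of the "lt 1.15 2" check traced above.
# Assumes purely numeric version fields; the real helper validates them first.
lt() {                                   # usage: lt VER1 VER2 -> status 0 if VER1 < VER2
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"     # same field splitting the trace shows (IFS=.-:)
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        (( d1 > d2 )) && return 1        # first differing field decides
        (( d1 < d2 )) && return 0
    done
    return 1                             # equal versions are not "less than"
}

# Example (requires lcov to be installed, as on the test VM):
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.0"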
00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
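The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above comes from the traced test '[' '' -eq 1 ']': an unset variable expands to an empty string, the numeric comparison warns, the test simply evaluates false, and sourcing continues. A minimal stand-alone reproduction plus the usual "${var:-0}" guard follows; FLAG is a placeholder name for illustration, not the variable common.sh actually tests, and the guard is a generic fix rather than a patch from the SPDK tree.

#!/usr/bin/env bash
unset FLAG                         # placeholder variable, deliberately empty

if [ "$FLAG" -eq 1 ]; then         # prints "[: : integer expression expected", evaluates false
    echo "branch taken"
fi

if [ "${FLAG:-0}" -eq 1 ]; then    # defaulting the empty value to 0 keeps the numeric test quiet
    echo "branch taken"
fi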
00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:52.672 Cannot find device "nvmf_init_br" 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:52.672 08:43:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:52.672 Cannot find device "nvmf_init_br2" 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:52.672 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:52.931 Cannot find device "nvmf_tgt_br" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.931 Cannot find device "nvmf_tgt_br2" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:52.931 Cannot find device "nvmf_init_br" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:52.931 Cannot find device "nvmf_init_br2" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:52.931 Cannot find device "nvmf_tgt_br" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:52.931 Cannot find device "nvmf_tgt_br2" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:52.931 Cannot find device "nvmf_br" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:52.931 Cannot find device "nvmf_init_if" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:52.931 Cannot find device "nvmf_init_if2" 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:52.931 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.190 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.190 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:53.191 08:43:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:53.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:53.191 00:08:53.191 --- 10.0.0.3 ping statistics --- 00:08:53.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.191 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:53.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:53.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:53.191 00:08:53.191 --- 10.0.0.4 ping statistics --- 00:08:53.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.191 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:53.191 00:08:53.191 --- 10.0.0.1 ping statistics --- 00:08:53.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.191 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:53.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:53.191 00:08:53.191 --- 10.0.0.2 ping statistics --- 00:08:53.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.191 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=66176 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 66176 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 66176 ']' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.191 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.191 [2024-12-11 08:43:00.902187] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:08:53.191 [2024-12-11 08:43:00.902293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.449 [2024-12-11 08:43:01.049357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.449 [2024-12-11 08:43:01.086850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.449 [2024-12-11 08:43:01.086914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.449 [2024-12-11 08:43:01.086928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.449 [2024-12-11 08:43:01.086940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.449 [2024-12-11 08:43:01.086948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.449 [2024-12-11 08:43:01.087352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.449 [2024-12-11 08:43:01.120786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.449 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.449 [2024-12-11 08:43:01.219651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.708 [2024-12-11 08:43:01.235779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.708 malloc0 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.708 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.708 { 00:08:53.708 "params": { 00:08:53.708 "name": "Nvme$subsystem", 00:08:53.708 "trtype": "$TEST_TRANSPORT", 00:08:53.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.708 "adrfam": "ipv4", 00:08:53.708 "trsvcid": "$NVMF_PORT", 00:08:53.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.708 "hdgst": ${hdgst:-false}, 00:08:53.708 "ddgst": ${ddgst:-false} 00:08:53.708 }, 00:08:53.708 "method": "bdev_nvme_attach_controller" 00:08:53.708 } 00:08:53.709 EOF 00:08:53.709 )") 00:08:53.709 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:53.709 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:53.709 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:53.709 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.709 "params": { 00:08:53.709 "name": "Nvme1", 00:08:53.709 "trtype": "tcp", 00:08:53.709 "traddr": "10.0.0.3", 00:08:53.709 "adrfam": "ipv4", 00:08:53.709 "trsvcid": "4420", 00:08:53.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.709 "hdgst": false, 00:08:53.709 "ddgst": false 00:08:53.709 }, 00:08:53.709 "method": "bdev_nvme_attach_controller" 00:08:53.709 }' 00:08:53.709 [2024-12-11 08:43:01.324993] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:08:53.709 [2024-12-11 08:43:01.325090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66201 ] 00:08:53.709 [2024-12-11 08:43:01.478716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.968 [2024-12-11 08:43:01.517780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.968 [2024-12-11 08:43:01.559240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.968 Running I/O for 10 seconds... 00:08:56.282 6450.00 IOPS, 50.39 MiB/s [2024-12-11T08:43:04.993Z] 6472.50 IOPS, 50.57 MiB/s [2024-12-11T08:43:05.927Z] 6390.00 IOPS, 49.92 MiB/s [2024-12-11T08:43:06.862Z] 6289.50 IOPS, 49.14 MiB/s [2024-12-11T08:43:07.798Z] 6306.20 IOPS, 49.27 MiB/s [2024-12-11T08:43:08.781Z] 6310.83 IOPS, 49.30 MiB/s [2024-12-11T08:43:09.717Z] 6296.43 IOPS, 49.19 MiB/s [2024-12-11T08:43:11.094Z] 6291.38 IOPS, 49.15 MiB/s [2024-12-11T08:43:12.030Z] 6301.11 IOPS, 49.23 MiB/s [2024-12-11T08:43:12.030Z] 6328.50 IOPS, 49.44 MiB/s 00:09:04.256 Latency(us) 00:09:04.256 [2024-12-11T08:43:12.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:04.256 Verification LBA range: start 0x0 length 0x1000 00:09:04.256 Nvme1n1 : 10.01 6332.60 49.47 0.00 0.00 20149.84 2234.18 36223.53 00:09:04.256 [2024-12-11T08:43:12.030Z] =================================================================================================================== 00:09:04.256 [2024-12-11T08:43:12.030Z] Total : 6332.60 49.47 0.00 0.00 20149.84 2234.18 36223.53 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66313 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:04.256 { 00:09:04.256 "params": { 00:09:04.256 "name": "Nvme$subsystem", 00:09:04.256 "trtype": "$TEST_TRANSPORT", 00:09:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.256 "adrfam": "ipv4", 00:09:04.256 "trsvcid": "$NVMF_PORT", 00:09:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.256 "hdgst": ${hdgst:-false}, 00:09:04.256 "ddgst": ${ddgst:-false} 00:09:04.256 }, 00:09:04.256 "method": "bdev_nvme_attach_controller" 00:09:04.256 } 00:09:04.256 EOF 00:09:04.256 )") 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:04.256 [2024-12-11 08:43:11.818830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.818904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:04.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.256 "params": { 00:09:04.256 "name": "Nvme1", 00:09:04.256 "trtype": "tcp", 00:09:04.256 "traddr": "10.0.0.3", 00:09:04.256 "adrfam": "ipv4", 00:09:04.256 "trsvcid": "4420", 00:09:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.256 "hdgst": false, 00:09:04.256 "ddgst": false 00:09:04.256 }, 00:09:04.256 "method": "bdev_nvme_attach_controller" 00:09:04.256 }' 00:09:04.256 [2024-12-11 08:43:11.830789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.830832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.842789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.842832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.854794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.854837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.866821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.866866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.871726] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:09:04.256 [2024-12-11 08:43:11.871805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66313 ] 00:09:04.256 [2024-12-11 08:43:11.878818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.878845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.890820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.890845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.902845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.902871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.914844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.914867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.926842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.926884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.938858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.938900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.950857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.950898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.256 [2024-12-11 08:43:11.962860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.256 [2024-12-11 08:43:11.962901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:11.974864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:11.974905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:11.986880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:11.986921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:11.998891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:11.998933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:12.010890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:12.010933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:12.018892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:12.018933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.257 [2024-12-11 08:43:12.019019] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:09:04.257 [2024-12-11 08:43:12.026927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.257 [2024-12-11 08:43:12.026984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.034930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.034979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.046913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.046958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.050959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.516 [2024-12-11 08:43:12.058906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.058949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.066932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.066985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.078944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.079002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.088339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.516 [2024-12-11 08:43:12.090931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.090977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.102940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.102995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.110928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.110971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.122983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.123044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.131029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.131080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.142985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.143033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.155025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.155073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.163025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:04.516 [2024-12-11 08:43:12.163071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.175036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.175081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.183058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.183108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 Running I/O for 5 seconds... 00:09:04.516 [2024-12-11 08:43:12.195044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.195089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.207371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.207424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.216785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.216834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.227936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.227985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.243803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.243852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.261477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.261527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.516 [2024-12-11 08:43:12.277296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.516 [2024-12-11 08:43:12.277348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.288736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.288789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.304249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.304288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.320377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.320429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.330061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.330115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.345444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.345494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:04.776 [2024-12-11 08:43:12.355614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.355663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.365931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.365980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.380293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.380345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.390148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.390209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.405532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.405583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.415987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.416037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.428323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.428373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.438445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.438495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.452848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.452898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.462718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.462768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.473290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.473339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.488050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.488100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.505189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.505239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.514977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.515026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.529765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 
[2024-12-11 08:43:12.529815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.776 [2024-12-11 08:43:12.547579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.776 [2024-12-11 08:43:12.547629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.563918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.563968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.580415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.580465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.591575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.591640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.607832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.607880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.623695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.623744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.633256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.633319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.644834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.644885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.657097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.657172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.668451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.668500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.677969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.678018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.692766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.692816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.709618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.709667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.718950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.718998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.733032] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.733081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.743968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.744018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.759907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.759957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.776312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.776348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.787062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.787114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.035 [2024-12-11 08:43:12.801774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.035 [2024-12-11 08:43:12.801823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.816736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.816788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.832990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.833041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.842543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.842594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.856894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.856945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.871519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.871586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.887865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.887914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.904800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.904849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.920867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.920917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.931332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.931367] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.944094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.944176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.956332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.956369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.972950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.972983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.989056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.989107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:12.998742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:12.998792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:13.014478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:13.014546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:13.031474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:13.031513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:13.047457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:13.047528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-12-11 08:43:13.056851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-12-11 08:43:13.056901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.073022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.073074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.089932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.089982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.100011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.100061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.114535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.114601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.132334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.132384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.142589] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.142640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.157781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.157833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.175967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.176016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.190780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.190832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 11780.00 IOPS, 92.03 MiB/s [2024-12-11T08:43:13.328Z] [2024-12-11 08:43:13.200459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.200495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.214709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.214760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.232581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.232631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.245917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.245967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.254744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.254793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.265175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.265224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.275902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.275950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.293461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.293528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.309127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.309203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.554 [2024-12-11 08:43:13.318438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.554 [2024-12-11 08:43:13.318487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.331880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
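The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages is produced while the 5-second randrw job is in flight: the test keeps re-issuing the nvmf_subsystem_add_ns RPC against cnode1, whose NSID 1 is already allocated, so each attempt is rejected. The sketch below is a hedged reproduction of that pattern, not the actual contents of target/zcopy.sh (which this log does not show); the Malloc0 bdev name and the iteration count are illustrative assumptions.

# Hypothetical loop reproducing the add_ns error pattern seen in this log.
# Assumes a running SPDK nvmf target where nqn.2016-06.io.spdk:cnode1 already
# exposes NSID 1 backed by a bdev named Malloc0 (names assumed for illustration).
for _ in $(seq 1 50); do
  # Each attempt fails with "Requested NSID 1 already in use" because the
  # namespace is never removed; the RPC layer then logs "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
done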
00:09:05.813 [2024-12-11 08:43:13.331931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.346353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.346403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.354933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.354983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.367096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.367195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.377058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.377106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.387336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.387389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.397630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.397678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.408053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.408101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.422698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.422748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.439021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.439071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.448744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.448793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.460109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.460188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.471451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.471506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.481479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.481529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.492461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.492499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.503985] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.504037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.515343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.515381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.530932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.530984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.542217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.542271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.557713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.557781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.572874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.572924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.813 [2024-12-11 08:43:13.582660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.813 [2024-12-11 08:43:13.582712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.598223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.598284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.616616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.616693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.630604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.630681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.647338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.647412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.661432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.661492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.670477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.670558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.684492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.684572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.693186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.693248] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.072 [2024-12-11 08:43:13.706820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.072 [2024-12-11 08:43:13.706895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.715938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.715989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.731349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.731416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.748987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.749037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.759700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.759749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.771281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.771320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.784558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.784607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.802115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.802196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.817259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.817297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.828188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.828246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.073 [2024-12-11 08:43:13.841415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.073 [2024-12-11 08:43:13.841453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.854456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.854496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.869765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.869814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.886208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.886280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.896317] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.896371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.911252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.911290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.928775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.928824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.944476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.944526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.954022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.954072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.967960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.968043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.979115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.979200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:13.991773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:13.991832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.003657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.003709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.015802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.015835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.027386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.027421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.043901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.043942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.060388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.060441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.069915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.069965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.081064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.081117] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.332 [2024-12-11 08:43:14.093054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.332 [2024-12-11 08:43:14.093106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.108657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.108709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.126227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.126279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.136336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.136386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.150352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.150419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.165918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.165992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.175302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.175357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.187818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.187886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 11735.00 IOPS, 91.68 MiB/s [2024-12-11T08:43:14.366Z] [2024-12-11 08:43:14.203258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.203323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.212673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.212746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.226826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.226895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.246108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.246204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.257175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.257257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.275446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.275538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 
08:43:14.290451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.290540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.307642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.307716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.321134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.321211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.329926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.329974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.344639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.344688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.592 [2024-12-11 08:43:14.353108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.592 [2024-12-11 08:43:14.353187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.368444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.368508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.377012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.377059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.392755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.392802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.401592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.401639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.418498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.418563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.436116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.436195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.451278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.451317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.460102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.460180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.478066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.478115] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.495141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.495242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.510052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.510101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.518815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.518864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.533695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.533744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.542999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.543048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.556262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.556311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.565430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.565479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.579435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.579486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.595001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.595049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.851 [2024-12-11 08:43:14.612357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.851 [2024-12-11 08:43:14.612406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.628874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.628924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.638162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.638223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.649729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.649778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.658957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.659005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.669313] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.669362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.679268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.679319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.688839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.688887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.702852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.702900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.711576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.711624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.726050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.726099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.734986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.735035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.750800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.750849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.760348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.760397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.776974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.777024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.794984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.795033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.804944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.804993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.818964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.819014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.835285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.835338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.850951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.851001] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.861712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.861762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.111 [2024-12-11 08:43:14.874159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.111 [2024-12-11 08:43:14.874221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.370 [2024-12-11 08:43:14.889200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.889263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.907082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.907167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.917306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.917356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.931341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.931393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.940697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.940746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.955249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.955302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.972602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.972653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.982924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.982974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:14.998104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:14.998181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.007774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.007823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.022636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.022686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.037685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.037735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.047449] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.047487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.062745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.062792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.077338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.077392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.087231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.087289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.101533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.101589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.117253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.117311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.127157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.127266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.371 [2024-12-11 08:43:15.142697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.371 [2024-12-11 08:43:15.142800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.152792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.152854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.163785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.163846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.174006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.174056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.188778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.188853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 11918.00 IOPS, 93.11 MiB/s [2024-12-11T08:43:15.404Z] [2024-12-11 08:43:15.205461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.205558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.221587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.221662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.231391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:07.630 [2024-12-11 08:43:15.231451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.245024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.245097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.259886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.259965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.269122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.269218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.284754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.284830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.300485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.300555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.310160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.310211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.324998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.325046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.335423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.335463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.349296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.349343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.358211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.358258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.371997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.372043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.380576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.380623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.630 [2024-12-11 08:43:15.394852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.630 [2024-12-11 08:43:15.394899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.888 [2024-12-11 08:43:15.409904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.888 [2024-12-11 08:43:15.409953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.425255] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.425303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.436639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.436686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.452682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.452731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.469035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.469083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.480047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.480094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.496005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.496053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.513523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.513572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.522970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.523017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.536653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.536701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.545206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.545262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.559108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.559204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.575146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.575237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.591871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.591909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.608672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.608720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.620103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.620167] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.631309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.631346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.643561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.643625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.889 [2024-12-11 08:43:15.658224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.889 [2024-12-11 08:43:15.658299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.675973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.676023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.691060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.691109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.700901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.700951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.716359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.716397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.731768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.731816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.741261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.741310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.753973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.754023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.763777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.763825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.777585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.777635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.792939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.792988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.802801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.802851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.818870] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.818922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.829614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.829663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.840670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.840719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.858958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.859009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.873696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.873747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.883720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.883770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.896760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.896803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.148 [2024-12-11 08:43:15.908282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.148 [2024-12-11 08:43:15.908320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.924178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.924264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.939331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.939369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.947899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.947949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.959745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.959794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.968956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.969004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.983016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.983064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:15.992688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:15.992736] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.007630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.007662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.023188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.023225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.032257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.032309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.048257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.048328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.058259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.058312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.072399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.072454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.081474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.081543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.096391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.096467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.114528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.114598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.124629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.124691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.138288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.138339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.155133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.155251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.408 [2024-12-11 08:43:16.171105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.408 [2024-12-11 08:43:16.171228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.667 [2024-12-11 08:43:16.181898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.667 [2024-12-11 08:43:16.181945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.667 11968.50 IOPS, 93.50 MiB/s [2024-12-11T08:43:16.441Z] [2024-12-11 
08:43:16.196865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.667 [2024-12-11 08:43:16.196917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.667 [2024-12-11 08:43:16.213769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.667 [2024-12-11 08:43:16.213825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.667 [2024-12-11 08:43:16.224068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.667 [2024-12-11 08:43:16.224135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.236848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.236898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.251841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.251891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.267790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.267841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.277458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.277525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.291643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.291691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.301441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.301492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.311815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.311865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.321670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.321719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.331630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.331680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.346093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.346166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.355405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.355458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.370634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.370683] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.386612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.386662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.403235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.403287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.420634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.420685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.668 [2024-12-11 08:43:16.430467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.668 [2024-12-11 08:43:16.430518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.445389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.445440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.454580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.454629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.469376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.469426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.478287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.478336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.492732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.492782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.502946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.502997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.517906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.517955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.534964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.535017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.550845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.550924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.569041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.569115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.583042] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.583112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.599409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.599506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.615982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.616061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.625489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.625576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.639631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.639707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.655569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.655661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.673722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.673804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.927 [2024-12-11 08:43:16.687966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.927 [2024-12-11 08:43:16.688046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.703970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.704045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.713433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.713492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.724107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.724189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.740713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.740782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.758898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.758949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.773439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.773478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.783899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.783952] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.796523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.796560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.808378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.808416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.823979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.824029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.833790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.833839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.850163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.850225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.867286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.867324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.877329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.877380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.888842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.888891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.904540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.904605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.916109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.916199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.933015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.933070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.943983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.944034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.186 [2024-12-11 08:43:16.956449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.186 [2024-12-11 08:43:16.956493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:16.967387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:16.967427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:16.977658] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:16.977708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:16.993003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:16.993053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.003725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.003774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.018942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.018994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.029561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.029610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.044896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.044946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.059984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.060033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.069090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.069167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.080665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.080715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.093722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.093772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.103151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.445 [2024-12-11 08:43:17.103272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.445 [2024-12-11 08:43:17.117046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.117096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.126296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.126345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.139925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.139974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.155519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.155583] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.164491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.164540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.178535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.178584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 [2024-12-11 08:43:17.187867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.187932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 11956.00 IOPS, 93.41 MiB/s [2024-12-11T08:43:17.220Z] [2024-12-11 08:43:17.200842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.200892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.446 00:09:09.446 Latency(us) 00:09:09.446 [2024-12-11T08:43:17.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.446 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:09.446 Nvme1n1 : 5.01 11958.77 93.43 0.00 0.00 10689.03 3872.58 21090.68 00:09:09.446 [2024-12-11T08:43:17.220Z] =================================================================================================================== 00:09:09.446 [2024-12-11T08:43:17.220Z] Total : 11958.77 93.43 0.00 0.00 10689.03 3872.58 21090.68 00:09:09.446 [2024-12-11 08:43:17.205952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.446 [2024-12-11 08:43:17.206016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.217952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.218005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.225942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.225991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.237983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.238048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.245972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.246014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.257991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.258036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.269976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.270035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.281990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 
08:43:17.282037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.289972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.290018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.301976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.302032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.309970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.310024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.321981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.322045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.333989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.334044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 [2024-12-11 08:43:17.341983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.705 [2024-12-11 08:43:17.342025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.705 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66313) - No such process 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66313 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.705 delay0 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.705 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:09.964 [2024-12-11 08:43:17.551535] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:16.600 Initializing NVMe Controllers 00:09:16.600 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.600 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:16.600 Initialization complete. Launching workers. 00:09:16.600 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:09:16.600 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:09:16.600 success 206, unsuccessful 147, failed 0 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.600 rmmod nvme_tcp 00:09:16.600 rmmod nvme_fabrics 00:09:16.600 rmmod nvme_keyring 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 66176 ']' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 66176 ']' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:16.600 killing process with pid 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66176' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 66176 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.600 08:43:23 
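The xtrace lines above show the second half of the zcopy test: after the long run of duplicate-NSID add attempts ends (the background abort process 66313 had already exited, hence the failed kill), namespace 1 is removed, a delay bdev named delay0 is created on top of malloc0, delay0 is added back as NSID 1, and the abort example is then run for 5 seconds against the TCP target at 10.0.0.3:4420, producing the completion counts above. A minimal standalone sketch of that same sequence, assuming a default SPDK RPC socket instead of the harness's rpc_cmd wrapper, would be:

    # Recreate NSID 1 behind a delay bdev, then drive it with the abort example.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'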
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:16.600 08:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:16.600 00:09:16.600 real 0m23.943s 00:09:16.600 user 0m39.044s 00:09:16.600 sys 0m6.643s 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.600 ************************************ 00:09:16.600 END TEST nvmf_zcopy 00:09:16.600 ************************************ 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.600 ************************************ 00:09:16.600 START TEST nvmf_nmic 00:09:16.600 ************************************ 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.600 * Looking for test storage... 00:09:16.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.600 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:16.601 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.860 --rc genhtml_branch_coverage=1 00:09:16.860 --rc genhtml_function_coverage=1 00:09:16.860 --rc genhtml_legend=1 00:09:16.860 --rc geninfo_all_blocks=1 00:09:16.860 --rc geninfo_unexecuted_blocks=1 00:09:16.860 00:09:16.860 ' 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.860 --rc genhtml_branch_coverage=1 00:09:16.860 --rc genhtml_function_coverage=1 00:09:16.860 --rc genhtml_legend=1 00:09:16.860 --rc geninfo_all_blocks=1 00:09:16.860 --rc geninfo_unexecuted_blocks=1 00:09:16.860 00:09:16.860 ' 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.860 --rc genhtml_branch_coverage=1 00:09:16.860 --rc genhtml_function_coverage=1 00:09:16.860 --rc genhtml_legend=1 00:09:16.860 --rc geninfo_all_blocks=1 00:09:16.860 --rc geninfo_unexecuted_blocks=1 00:09:16.860 00:09:16.860 ' 00:09:16.860 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.860 --rc genhtml_branch_coverage=1 00:09:16.860 --rc genhtml_function_coverage=1 00:09:16.860 --rc genhtml_legend=1 00:09:16.860 --rc geninfo_all_blocks=1 00:09:16.860 --rc geninfo_unexecuted_blocks=1 00:09:16.860 00:09:16.860 ' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.861 08:43:24 
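The trace above walks through the version check in scripts/common.sh that decides which coverage flags get exported: lt 1.15 2 splits both version strings on '.', '-' and ':', validates each component with decimal, and compares component by component before LCOV_OPTS and LCOV are set. A simplified bash sketch of that comparison (illustrative only; it reuses the helper names but omits the decimal validation seen in the trace) is:

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # First differing component decides the ordering.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == ">" || $2 == ">=" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == "<" || $2 == "<=" ]]; return; }
        done
        [[ $2 == "==" || $2 == "<=" || $2 == ">=" ]]
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken in this run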
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:16.861 08:43:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.861 Cannot 
find device "nvmf_init_br" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.861 Cannot find device "nvmf_init_br2" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.861 Cannot find device "nvmf_tgt_br" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.861 Cannot find device "nvmf_tgt_br2" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.861 Cannot find device "nvmf_init_br" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.861 Cannot find device "nvmf_init_br2" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.861 Cannot find device "nvmf_tgt_br" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.861 Cannot find device "nvmf_tgt_br2" 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:16.861 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.861 Cannot find device "nvmf_br" 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.862 Cannot find device "nvmf_init_if" 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.862 Cannot find device "nvmf_init_if2" 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.862 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:17.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:17.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:17.121 00:09:17.121 --- 10.0.0.3 ping statistics --- 00:09:17.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.121 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:17.121 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:17.121 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:17.121 00:09:17.121 --- 10.0.0.4 ping statistics --- 00:09:17.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.121 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:17.121 00:09:17.121 --- 10.0.0.1 ping statistics --- 00:09:17.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.121 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:17.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:09:17.121 00:09:17.121 --- 10.0.0.2 ping statistics --- 00:09:17.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.121 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66695 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66695 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66695 ']' 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.121 08:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.121 [2024-12-11 08:43:24.852278] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:09:17.121 [2024-12-11 08:43:24.852387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.380 [2024-12-11 08:43:25.008269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.380 [2024-12-11 08:43:25.048212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.380 [2024-12-11 08:43:25.048285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.380 [2024-12-11 08:43:25.048298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.380 [2024-12-11 08:43:25.048308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.380 [2024-12-11 08:43:25.048317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.380 [2024-12-11 08:43:25.049204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.380 [2024-12-11 08:43:25.049348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.380 [2024-12-11 08:43:25.049480] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.380 [2024-12-11 08:43:25.049487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.380 [2024-12-11 08:43:25.082534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 [2024-12-11 08:43:25.842259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 Malloc0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.315 08:43:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 [2024-12-11 08:43:25.912672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 test case1: single bdev can't be used in multiple subsystems 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 [2024-12-11 08:43:25.936517] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:18.315 [2024-12-11 08:43:25.936565] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:18.315 [2024-12-11 08:43:25.936576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.315 request: 00:09:18.315 { 00:09:18.315 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:18.315 "namespace": { 00:09:18.315 "bdev_name": "Malloc0", 00:09:18.315 "no_auto_visible": false, 00:09:18.315 "hide_metadata": false 00:09:18.315 }, 00:09:18.315 "method": "nvmf_subsystem_add_ns", 00:09:18.315 "req_id": 1 00:09:18.315 } 00:09:18.315 Got JSON-RPC error response 00:09:18.315 response: 00:09:18.315 { 00:09:18.315 "code": -32602, 00:09:18.315 "message": "Invalid parameters" 00:09:18.315 } 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:18.315 Adding namespace failed - expected result. 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:18.315 test case2: host connect to nvmf target in multiple paths 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 [2024-12-11 08:43:25.948610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.315 08:43:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:18.315 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:18.574 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.574 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:18.574 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.574 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:18.574 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:20.476 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.476 [global] 00:09:20.476 thread=1 00:09:20.476 invalidate=1 00:09:20.476 rw=write 00:09:20.476 time_based=1 00:09:20.476 runtime=1 00:09:20.476 ioengine=libaio 00:09:20.476 direct=1 00:09:20.476 bs=4096 00:09:20.476 iodepth=1 00:09:20.476 norandommap=0 00:09:20.476 numjobs=1 00:09:20.476 00:09:20.476 verify_dump=1 00:09:20.476 verify_backlog=512 00:09:20.476 verify_state_save=0 00:09:20.476 do_verify=1 00:09:20.476 verify=crc32c-intel 00:09:20.735 [job0] 00:09:20.735 filename=/dev/nvme0n1 00:09:20.735 Could not set queue depth (nvme0n1) 00:09:20.735 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.735 fio-3.35 00:09:20.735 Starting 1 thread 00:09:22.111 00:09:22.111 job0: (groupid=0, jobs=1): err= 0: pid=66787: Wed Dec 11 08:43:29 2024 00:09:22.111 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:09:22.111 slat (nsec): min=11126, max=67609, avg=14222.42, stdev=4680.24 00:09:22.111 clat (usec): min=141, max=507, avg=185.70, stdev=21.37 00:09:22.111 lat (usec): min=155, max=520, avg=199.93, stdev=21.75 00:09:22.111 clat percentiles (usec): 00:09:22.111 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:09:22.111 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:22.111 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 225], 00:09:22.111 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 255], 99.95th=[ 273], 00:09:22.111 | 99.99th=[ 510] 00:09:22.111 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:22.112 slat (usec): min=17, max=926, avg=23.42, stdev=18.17 00:09:22.112 clat (usec): min=86, max=2991, avg=117.58, stdev=76.25 00:09:22.112 lat (usec): min=106, max=3011, avg=141.00, stdev=83.79 00:09:22.112 clat percentiles (usec): 00:09:22.112 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 101], 00:09:22.112 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 114], 00:09:22.112 | 70.00th=[ 118], 80.00th=[ 126], 90.00th=[ 137], 95.00th=[ 149], 00:09:22.112 | 99.00th=[ 176], 99.50th=[ 200], 99.90th=[ 1237], 99.95th=[ 1860], 00:09:22.112 | 99.99th=[ 2999] 00:09:22.112 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:09:22.112 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:22.112 lat (usec) : 100=8.84%, 250=90.85%, 500=0.15%, 750=0.05%, 1000=0.02% 00:09:22.112 lat (msec) : 2=0.07%, 4=0.02% 00:09:22.112 cpu : usr=1.70%, sys=9.20%, ctx=5860, majf=0, minf=5 00:09:22.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.112 issued rwts: total=2787,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.112 00:09:22.112 Run status group 0 (all jobs): 00:09:22.112 READ: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:09:22.112 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:22.112 00:09:22.112 Disk stats 
(read/write): 00:09:22.112 nvme0n1: ios=2610/2662, merge=0/0, ticks=527/346, in_queue=873, util=91.08% 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.112 rmmod nvme_tcp 00:09:22.112 rmmod nvme_fabrics 00:09:22.112 rmmod nvme_keyring 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66695 ']' 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66695 ']' 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.112 killing process with pid 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66695' 00:09:22.112 08:43:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66695 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:22.112 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:22.370 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:22.370 00:09:22.370 real 0m5.938s 00:09:22.370 user 0m18.207s 00:09:22.370 sys 0m2.271s 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.370 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.370 ************************************ 00:09:22.370 END TEST nvmf_nmic 00:09:22.370 ************************************ 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.630 ************************************ 00:09:22.630 START TEST nvmf_fio_target 00:09:22.630 ************************************ 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.630 * Looking for test storage... 00:09:22.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.630 --rc genhtml_branch_coverage=1 00:09:22.630 --rc genhtml_function_coverage=1 00:09:22.630 --rc genhtml_legend=1 00:09:22.630 --rc geninfo_all_blocks=1 00:09:22.630 --rc geninfo_unexecuted_blocks=1 00:09:22.630 00:09:22.630 ' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.630 --rc genhtml_branch_coverage=1 00:09:22.630 --rc genhtml_function_coverage=1 00:09:22.630 --rc genhtml_legend=1 00:09:22.630 --rc geninfo_all_blocks=1 00:09:22.630 --rc geninfo_unexecuted_blocks=1 00:09:22.630 00:09:22.630 ' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.630 --rc genhtml_branch_coverage=1 00:09:22.630 --rc genhtml_function_coverage=1 00:09:22.630 --rc genhtml_legend=1 00:09:22.630 --rc geninfo_all_blocks=1 00:09:22.630 --rc geninfo_unexecuted_blocks=1 00:09:22.630 00:09:22.630 ' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.630 --rc genhtml_branch_coverage=1 00:09:22.630 --rc genhtml_function_coverage=1 00:09:22.630 --rc genhtml_legend=1 00:09:22.630 --rc geninfo_all_blocks=1 00:09:22.630 --rc geninfo_unexecuted_blocks=1 00:09:22.630 00:09:22.630 ' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:22.630 
08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.630 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.631 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.631 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.890 08:43:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:22.890 Cannot find device "nvmf_init_br" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:22.890 Cannot find device "nvmf_init_br2" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:22.890 Cannot find device "nvmf_tgt_br" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.890 Cannot find device "nvmf_tgt_br2" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:22.890 Cannot find device "nvmf_init_br" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:22.890 Cannot find device "nvmf_init_br2" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:22.890 Cannot find device "nvmf_tgt_br" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:22.890 Cannot find device "nvmf_tgt_br2" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:22.890 Cannot find device "nvmf_br" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:22.890 Cannot find device "nvmf_init_if" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:22.890 Cannot find device "nvmf_init_if2" 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:22.890 
08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:22.890 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.149 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:23.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:23.150 00:09:23.150 --- 10.0.0.3 ping statistics --- 00:09:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.150 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:23.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:23.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:23.150 00:09:23.150 --- 10.0.0.4 ping statistics --- 00:09:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.150 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:23.150 00:09:23.150 --- 10.0.0.1 ping statistics --- 00:09:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.150 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:23.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:23.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:23.150 00:09:23.150 --- 10.0.0.2 ping statistics --- 00:09:23.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.150 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=67014 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 67014 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 67014 ']' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.150 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.150 [2024-12-11 08:43:30.899340] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
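The nvmf_veth_init sequence traced above builds two veth pairs on the initiator side and two on the target side, moves the target ends into the nvmf_tgt_ns_spdk namespace, bridges the four peer ends through nvmf_br, opens TCP port 4420 in iptables, and checks reachability in both directions with ping before the target application is launched. The following is a condensed sketch of the equivalent iproute2/iptables commands, not a verbatim excerpt of nvmf/common.sh: interface names and the 10.0.0.x/24 addresses are the ones printed in the trace, while the second initiator/target pair, most of the link-up steps and the FORWARD rule are omitted for brevity.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator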
00:09:23.150 [2024-12-11 08:43:30.899426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.408 [2024-12-11 08:43:31.041658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.408 [2024-12-11 08:43:31.070862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.408 [2024-12-11 08:43:31.070935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.408 [2024-12-11 08:43:31.070945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.408 [2024-12-11 08:43:31.070953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.408 [2024-12-11 08:43:31.070959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.408 [2024-12-11 08:43:31.071818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.408 [2024-12-11 08:43:31.072289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.408 [2024-12-11 08:43:31.072887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.408 [2024-12-11 08:43:31.072898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.408 [2024-12-11 08:43:31.101463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.342 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.600 [2024-12-11 08:43:32.129447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.600 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.858 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:24.858 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.116 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:25.116 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.374 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:25.374 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.633 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:25.633 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:25.891 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.149 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:26.149 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.407 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:26.408 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.666 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:26.666 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:26.924 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.181 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.181 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.444 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.444 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.714 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:27.986 [2024-12-11 08:43:35.586403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:27.986 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:28.244 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:28.502 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:28.760 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:28.760 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:28.760 08:43:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.760 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:28.760 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:28.760 08:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:30.660 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.660 [global] 00:09:30.660 thread=1 00:09:30.660 invalidate=1 00:09:30.660 rw=write 00:09:30.660 time_based=1 00:09:30.660 runtime=1 00:09:30.660 ioengine=libaio 00:09:30.660 direct=1 00:09:30.660 bs=4096 00:09:30.660 iodepth=1 00:09:30.660 norandommap=0 00:09:30.660 numjobs=1 00:09:30.660 00:09:30.660 verify_dump=1 00:09:30.660 verify_backlog=512 00:09:30.660 verify_state_save=0 00:09:30.660 do_verify=1 00:09:30.660 verify=crc32c-intel 00:09:30.660 [job0] 00:09:30.660 filename=/dev/nvme0n1 00:09:30.660 [job1] 00:09:30.660 filename=/dev/nvme0n2 00:09:30.660 [job2] 00:09:30.660 filename=/dev/nvme0n3 00:09:30.660 [job3] 00:09:30.660 filename=/dev/nvme0n4 00:09:30.660 Could not set queue depth (nvme0n1) 00:09:30.660 Could not set queue depth (nvme0n2) 00:09:30.660 Could not set queue depth (nvme0n3) 00:09:30.660 Could not set queue depth (nvme0n4) 00:09:30.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.918 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.918 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.918 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.918 fio-3.35 00:09:30.918 Starting 4 threads 00:09:32.291 00:09:32.291 job0: (groupid=0, jobs=1): err= 0: pid=67204: Wed Dec 11 08:43:39 2024 00:09:32.291 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:09:32.291 slat (nsec): min=11238, max=65179, avg=13919.63, stdev=3551.63 00:09:32.291 clat (usec): min=135, max=253, avg=165.63, stdev=14.32 00:09:32.291 lat (usec): min=146, max=268, avg=179.55, stdev=15.06 00:09:32.291 clat percentiles (usec): 00:09:32.291 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:32.291 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:32.291 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:09:32.291 | 99.00th=[ 210], 99.50th=[ 212], 99.90th=[ 223], 99.95th=[ 237], 00:09:32.291 | 99.99th=[ 253] 
00:09:32.291 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:32.291 slat (nsec): min=13632, max=59199, avg=20403.81, stdev=4006.11 00:09:32.291 clat (usec): min=90, max=423, avg=124.18, stdev=17.97 00:09:32.291 lat (usec): min=109, max=442, avg=144.58, stdev=18.81 00:09:32.291 clat percentiles (usec): 00:09:32.291 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 112], 00:09:32.291 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 126], 00:09:32.291 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 155], 00:09:32.291 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 217], 99.95th=[ 219], 00:09:32.291 | 99.99th=[ 424] 00:09:32.291 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.291 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.291 lat (usec) : 100=1.19%, 250=98.77%, 500=0.03% 00:09:32.291 cpu : usr=2.70%, sys=7.90%, ctx=6112, majf=0, minf=11 00:09:32.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.291 issued rwts: total=3039,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.292 job1: (groupid=0, jobs=1): err= 0: pid=67205: Wed Dec 11 08:43:39 2024 00:09:32.292 read: IOPS=1918, BW=7672KiB/s (7856kB/s)(7680KiB/1001msec) 00:09:32.292 slat (nsec): min=8789, max=47782, avg=12089.90, stdev=3092.51 00:09:32.292 clat (usec): min=137, max=964, avg=269.77, stdev=30.80 00:09:32.292 lat (usec): min=150, max=977, avg=281.86, stdev=31.54 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:09:32.292 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:09:32.292 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:09:32.292 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 963], 00:09:32.292 | 99.99th=[ 963] 00:09:32.292 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:32.292 slat (nsec): min=11608, max=62164, avg=19942.53, stdev=4882.11 00:09:32.292 clat (usec): min=125, max=1637, avg=201.11, stdev=45.84 00:09:32.292 lat (usec): min=146, max=1652, avg=221.05, stdev=45.92 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 137], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 182], 00:09:32.292 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:09:32.292 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 237], 00:09:32.292 | 99.00th=[ 277], 99.50th=[ 408], 99.90th=[ 627], 99.95th=[ 709], 00:09:32.292 | 99.99th=[ 1631] 00:09:32.292 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:32.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:32.292 lat (usec) : 250=58.39%, 500=41.46%, 750=0.10%, 1000=0.03% 00:09:32.292 lat (msec) : 2=0.03% 00:09:32.292 cpu : usr=1.40%, sys=5.40%, ctx=3968, majf=0, minf=13 00:09:32.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 issued rwts: total=1920,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.292 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:32.292 job2: (groupid=0, jobs=1): err= 0: pid=67206: Wed Dec 11 08:43:39 2024 00:09:32.292 read: IOPS=1926, BW=7704KiB/s (7889kB/s)(7712KiB/1001msec) 00:09:32.292 slat (nsec): min=11643, max=49944, avg=14559.09, stdev=3052.27 00:09:32.292 clat (usec): min=219, max=513, avg=268.45, stdev=25.84 00:09:32.292 lat (usec): min=235, max=528, avg=283.01, stdev=26.01 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:09:32.292 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:09:32.292 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 322], 00:09:32.292 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 515], 00:09:32.292 | 99.99th=[ 515] 00:09:32.292 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:32.292 slat (usec): min=10, max=145, avg=18.73, stdev= 6.71 00:09:32.292 clat (usec): min=71, max=1717, avg=200.01, stdev=46.54 00:09:32.292 lat (usec): min=139, max=1737, avg=218.75, stdev=46.58 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 131], 5.00th=[ 145], 10.00th=[ 165], 20.00th=[ 184], 00:09:32.292 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:09:32.292 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 237], 00:09:32.292 | 99.00th=[ 277], 99.50th=[ 363], 99.90th=[ 611], 99.95th=[ 660], 00:09:32.292 | 99.99th=[ 1713] 00:09:32.292 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:32.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:32.292 lat (usec) : 100=0.03%, 250=60.76%, 500=39.11%, 750=0.08% 00:09:32.292 lat (msec) : 2=0.03% 00:09:32.292 cpu : usr=2.30%, sys=4.80%, ctx=3978, majf=0, minf=9 00:09:32.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 issued rwts: total=1928,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.292 job3: (groupid=0, jobs=1): err= 0: pid=67207: Wed Dec 11 08:43:39 2024 00:09:32.292 read: IOPS=2623, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1001msec) 00:09:32.292 slat (nsec): min=11462, max=44355, avg=13786.63, stdev=2956.74 00:09:32.292 clat (usec): min=147, max=1989, avg=180.17, stdev=39.75 00:09:32.292 lat (usec): min=159, max=2004, avg=193.95, stdev=39.96 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:09:32.292 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:09:32.292 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:09:32.292 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 420], 99.95th=[ 611], 00:09:32.292 | 99.99th=[ 1991] 00:09:32.292 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:32.292 slat (nsec): min=14187, max=82852, avg=20867.49, stdev=4519.09 00:09:32.292 clat (usec): min=103, max=192, avg=135.97, stdev=13.34 00:09:32.292 lat (usec): min=121, max=274, avg=156.83, stdev=14.35 00:09:32.292 clat percentiles (usec): 00:09:32.292 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 126], 00:09:32.292 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:32.292 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:09:32.292 | 99.00th=[ 
176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 188], 00:09:32.292 | 99.99th=[ 192] 00:09:32.292 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.292 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.292 lat (usec) : 250=99.93%, 500=0.04%, 750=0.02% 00:09:32.292 lat (msec) : 2=0.02% 00:09:32.292 cpu : usr=2.10%, sys=7.90%, ctx=5699, majf=0, minf=5 00:09:32.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.292 issued rwts: total=2626,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.292 00:09:32.292 Run status group 0 (all jobs): 00:09:32.292 READ: bw=37.1MiB/s (38.9MB/s), 7672KiB/s-11.9MiB/s (7856kB/s-12.4MB/s), io=37.2MiB (39.0MB), run=1001-1001msec 00:09:32.292 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:09:32.292 00:09:32.292 Disk stats (read/write): 00:09:32.292 nvme0n1: ios=2610/2650, merge=0/0, ticks=456/351, in_queue=807, util=87.07% 00:09:32.292 nvme0n2: ios=1574/1846, merge=0/0, ticks=418/366, in_queue=784, util=87.60% 00:09:32.292 nvme0n3: ios=1536/1896, merge=0/0, ticks=421/359, in_queue=780, util=88.98% 00:09:32.292 nvme0n4: ios=2286/2560, merge=0/0, ticks=423/372, in_queue=795, util=89.64% 00:09:32.292 08:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:32.292 [global] 00:09:32.292 thread=1 00:09:32.292 invalidate=1 00:09:32.292 rw=randwrite 00:09:32.292 time_based=1 00:09:32.292 runtime=1 00:09:32.292 ioengine=libaio 00:09:32.292 direct=1 00:09:32.292 bs=4096 00:09:32.292 iodepth=1 00:09:32.292 norandommap=0 00:09:32.292 numjobs=1 00:09:32.292 00:09:32.292 verify_dump=1 00:09:32.292 verify_backlog=512 00:09:32.292 verify_state_save=0 00:09:32.292 do_verify=1 00:09:32.292 verify=crc32c-intel 00:09:32.292 [job0] 00:09:32.292 filename=/dev/nvme0n1 00:09:32.292 [job1] 00:09:32.292 filename=/dev/nvme0n2 00:09:32.292 [job2] 00:09:32.292 filename=/dev/nvme0n3 00:09:32.292 [job3] 00:09:32.292 filename=/dev/nvme0n4 00:09:32.292 Could not set queue depth (nvme0n1) 00:09:32.292 Could not set queue depth (nvme0n2) 00:09:32.292 Could not set queue depth (nvme0n3) 00:09:32.292 Could not set queue depth (nvme0n4) 00:09:32.292 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.292 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.292 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.292 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.292 fio-3.35 00:09:32.292 Starting 4 threads 00:09:33.664 00:09:33.665 job0: (groupid=0, jobs=1): err= 0: pid=67265: Wed Dec 11 08:43:41 2024 00:09:33.665 read: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec) 00:09:33.665 slat (nsec): min=11436, max=41558, avg=13197.52, stdev=2715.75 00:09:33.665 clat (usec): min=150, max=1171, avg=268.29, stdev=34.24 00:09:33.665 lat (usec): min=163, max=1196, avg=281.49, stdev=34.89 00:09:33.665 clat percentiles (usec): 
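Everything those fio jobs write to was provisioned by the target/fio.sh steps traced earlier: a TCP transport, two standalone malloc bdevs, a two-disk RAID-0 bdev, a three-disk concat bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying all four as namespaces, a listener on 10.0.0.3:4420, and an initiator-side nvme connect. A condensed sketch of that provisioning sequence, using the same rpc.py calls that appear in the trace (the seven identical bdev_malloc_create calls are collapsed into one line, and the host NQN/ID arguments are abbreviated), looks like this:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512        # run seven times, producing Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=<host NQN> --hostid=<host ID>     # the four namespaces then appear as /dev/nvme0n1..n4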
00:09:33.665 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 251], 00:09:33.665 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:09:33.665 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:09:33.665 | 99.00th=[ 375], 99.50th=[ 412], 99.90th=[ 799], 99.95th=[ 1172], 00:09:33.665 | 99.99th=[ 1172] 00:09:33.665 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:33.665 slat (usec): min=17, max=113, avg=19.89, stdev= 3.99 00:09:33.665 clat (usec): min=103, max=748, avg=192.51, stdev=31.67 00:09:33.665 lat (usec): min=125, max=767, avg=212.40, stdev=32.92 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 123], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:09:33.665 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:09:33.665 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 221], 00:09:33.665 | 99.00th=[ 310], 99.50th=[ 367], 99.90th=[ 506], 99.95th=[ 570], 00:09:33.665 | 99.99th=[ 750] 00:09:33.665 bw ( KiB/s): min= 8192, max= 8192, per=20.75%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.665 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.665 lat (usec) : 250=58.70%, 500=41.18%, 750=0.07%, 1000=0.02% 00:09:33.665 lat (msec) : 2=0.02% 00:09:33.665 cpu : usr=1.30%, sys=5.40%, ctx=4034, majf=0, minf=7 00:09:33.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 issued rwts: total=1983,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.665 job1: (groupid=0, jobs=1): err= 0: pid=67266: Wed Dec 11 08:43:41 2024 00:09:33.665 read: IOPS=1987, BW=7948KiB/s (8139kB/s)(7956KiB/1001msec) 00:09:33.665 slat (nsec): min=11207, max=46351, avg=13651.91, stdev=3259.54 00:09:33.665 clat (usec): min=144, max=2018, avg=268.18, stdev=48.73 00:09:33.665 lat (usec): min=159, max=2043, avg=281.83, stdev=48.93 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 249], 00:09:33.665 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:09:33.665 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:09:33.665 | 99.00th=[ 359], 99.50th=[ 420], 99.90th=[ 766], 99.95th=[ 2024], 00:09:33.665 | 99.99th=[ 2024] 00:09:33.665 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:33.665 slat (usec): min=17, max=137, avg=21.53, stdev= 9.57 00:09:33.665 clat (usec): min=59, max=1084, avg=189.51, stdev=31.49 00:09:33.665 lat (usec): min=114, max=1153, avg=211.03, stdev=33.64 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 116], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 176], 00:09:33.665 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:33.665 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:09:33.665 | 99.00th=[ 277], 99.50th=[ 314], 99.90th=[ 379], 99.95th=[ 404], 00:09:33.665 | 99.99th=[ 1090] 00:09:33.665 bw ( KiB/s): min= 8192, max= 8192, per=20.75%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.665 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.665 lat (usec) : 100=0.07%, 250=60.17%, 500=39.63%, 750=0.05%, 1000=0.02% 00:09:33.665 lat (msec) : 2=0.02%, 4=0.02% 00:09:33.665 cpu : usr=0.90%, sys=6.20%, ctx=4054, majf=0, 
minf=9 00:09:33.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 issued rwts: total=1989,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.665 job2: (groupid=0, jobs=1): err= 0: pid=67267: Wed Dec 11 08:43:41 2024 00:09:33.665 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:33.665 slat (nsec): min=8120, max=43819, avg=11818.20, stdev=2026.06 00:09:33.665 clat (usec): min=149, max=3010, avg=186.89, stdev=83.86 00:09:33.665 lat (usec): min=161, max=3031, avg=198.70, stdev=83.93 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:09:33.665 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:33.665 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 251], 95.00th=[ 265], 00:09:33.665 | 99.00th=[ 285], 99.50th=[ 383], 99.90th=[ 1614], 99.95th=[ 1991], 00:09:33.665 | 99.99th=[ 2999] 00:09:33.665 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:09:33.665 slat (nsec): min=10615, max=77658, avg=18977.89, stdev=4368.82 00:09:33.665 clat (usec): min=105, max=1501, avg=141.60, stdev=34.84 00:09:33.665 lat (usec): min=124, max=1520, avg=160.58, stdev=34.58 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:09:33.665 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:33.665 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 176], 95.00th=[ 196], 00:09:33.665 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 260], 99.95th=[ 412], 00:09:33.665 | 99.99th=[ 1500] 00:09:33.665 bw ( KiB/s): min=12288, max=12288, per=31.12%, avg=12288.00, stdev= 0.00, samples=1 00:09:33.665 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:33.665 lat (usec) : 250=95.27%, 500=4.57%, 750=0.05%, 1000=0.02% 00:09:33.665 lat (msec) : 2=0.07%, 4=0.02% 00:09:33.665 cpu : usr=2.50%, sys=6.60%, ctx=5563, majf=0, minf=21 00:09:33.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 issued rwts: total=2560,3003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.665 job3: (groupid=0, jobs=1): err= 0: pid=67268: Wed Dec 11 08:43:41 2024 00:09:33.665 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:33.665 slat (nsec): min=11442, max=42611, avg=13491.61, stdev=2334.87 00:09:33.665 clat (usec): min=145, max=3583, avg=193.04, stdev=98.11 00:09:33.665 lat (usec): min=157, max=3598, avg=206.53, stdev=98.27 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:33.665 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:09:33.665 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 255], 95.00th=[ 273], 00:09:33.665 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 1663], 99.95th=[ 2737], 00:09:33.665 | 99.99th=[ 3589] 00:09:33.665 write: IOPS=2780, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec); 0 zone resets 00:09:33.665 slat (nsec): min=15162, max=71032, avg=21561.66, stdev=4764.34 00:09:33.665 clat 
(usec): min=104, max=5980, avg=144.35, stdev=135.09 00:09:33.665 lat (usec): min=123, max=5999, avg=165.91, stdev=135.36 00:09:33.665 clat percentiles (usec): 00:09:33.665 | 1.00th=[ 111], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:09:33.665 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:33.665 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 165], 95.00th=[ 188], 00:09:33.665 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 1893], 99.95th=[ 3425], 00:09:33.665 | 99.99th=[ 5997] 00:09:33.665 bw ( KiB/s): min=12288, max=12288, per=31.12%, avg=12288.00, stdev= 0.00, samples=1 00:09:33.665 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:33.665 lat (usec) : 250=94.14%, 500=5.69%, 750=0.04% 00:09:33.665 lat (msec) : 2=0.06%, 4=0.06%, 10=0.02% 00:09:33.665 cpu : usr=2.90%, sys=6.90%, ctx=5343, majf=0, minf=9 00:09:33.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.665 issued rwts: total=2560,2783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.665 00:09:33.665 Run status group 0 (all jobs): 00:09:33.665 READ: bw=35.5MiB/s (37.2MB/s), 7924KiB/s-9.99MiB/s (8114kB/s-10.5MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:09:33.665 WRITE: bw=38.6MiB/s (40.4MB/s), 8184KiB/s-11.7MiB/s (8380kB/s-12.3MB/s), io=38.6MiB (40.5MB), run=1001-1001msec 00:09:33.665 00:09:33.665 Disk stats (read/write): 00:09:33.665 nvme0n1: ios=1586/1992, merge=0/0, ticks=448/406, in_queue=854, util=87.98% 00:09:33.665 nvme0n2: ios=1585/1988, merge=0/0, ticks=446/400, in_queue=846, util=88.69% 00:09:33.665 nvme0n3: ios=2388/2560, merge=0/0, ticks=438/362, in_queue=800, util=89.28% 00:09:33.665 nvme0n4: ios=2260/2560, merge=0/0, ticks=418/372, in_queue=790, util=89.32% 00:09:33.665 08:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:33.665 [global] 00:09:33.665 thread=1 00:09:33.665 invalidate=1 00:09:33.665 rw=write 00:09:33.665 time_based=1 00:09:33.665 runtime=1 00:09:33.665 ioengine=libaio 00:09:33.665 direct=1 00:09:33.665 bs=4096 00:09:33.665 iodepth=128 00:09:33.665 norandommap=0 00:09:33.665 numjobs=1 00:09:33.665 00:09:33.665 verify_dump=1 00:09:33.665 verify_backlog=512 00:09:33.665 verify_state_save=0 00:09:33.665 do_verify=1 00:09:33.665 verify=crc32c-intel 00:09:33.665 [job0] 00:09:33.665 filename=/dev/nvme0n1 00:09:33.665 [job1] 00:09:33.665 filename=/dev/nvme0n2 00:09:33.665 [job2] 00:09:33.665 filename=/dev/nvme0n3 00:09:33.665 [job3] 00:09:33.665 filename=/dev/nvme0n4 00:09:33.665 Could not set queue depth (nvme0n1) 00:09:33.665 Could not set queue depth (nvme0n2) 00:09:33.665 Could not set queue depth (nvme0n3) 00:09:33.665 Could not set queue depth (nvme0n4) 00:09:33.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.665 fio-3.35 00:09:33.665 Starting 4 threads 
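Each fio-wrapper call above emits the same INI job file, with only rw, iodepth and the per-job filename varying; the listing just printed uses rw=write and iodepth=128 against the four connected namespaces. Purely as an illustration of the knobs involved (fio also accepts job-file keys as --key=value command-line options), a single-job equivalent of the job0 section would look roughly like:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=write --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
        --time_based --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

The actual run keeps four such jobs in one file so that nvme0n1 through nvme0n4 (the Malloc, RAID-0 and concat namespaces) are exercised by a single fio process, which is why the output that follows reports four separate jobs aggregated under run status group 0.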
00:09:35.039 00:09:35.040 job0: (groupid=0, jobs=1): err= 0: pid=67323: Wed Dec 11 08:43:42 2024 00:09:35.040 read: IOPS=4764, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1001msec) 00:09:35.040 slat (usec): min=5, max=3310, avg=99.65, stdev=472.32 00:09:35.040 clat (usec): min=339, max=14643, avg=13163.18, stdev=1289.75 00:09:35.040 lat (usec): min=2676, max=14655, avg=13262.83, stdev=1203.90 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[ 6128], 5.00th=[11469], 10.00th=[12518], 20.00th=[12780], 00:09:35.040 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:09:35.040 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14091], 95.00th=[14353], 00:09:35.040 | 99.00th=[14484], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:09:35.040 | 99.99th=[14615] 00:09:35.040 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:35.040 slat (usec): min=10, max=2986, avg=94.80, stdev=407.46 00:09:35.040 clat (usec): min=9059, max=13679, avg=12429.32, stdev=588.96 00:09:35.040 lat (usec): min=10650, max=13844, avg=12524.13, stdev=425.66 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:09:35.040 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:35.040 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13042], 95.00th=[13173], 00:09:35.040 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:09:35.040 | 99.99th=[13698] 00:09:35.040 bw ( KiB/s): min=20480, max=20521, per=26.93%, avg=20500.50, stdev=28.99, samples=2 00:09:35.040 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:35.040 lat (usec) : 500=0.01% 00:09:35.040 lat (msec) : 4=0.32%, 10=1.33%, 20=98.33% 00:09:35.040 cpu : usr=4.60%, sys=13.70%, ctx=311, majf=0, minf=9 00:09:35.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:35.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.040 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.040 job1: (groupid=0, jobs=1): err= 0: pid=67324: Wed Dec 11 08:43:42 2024 00:09:35.040 read: IOPS=4681, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1005msec) 00:09:35.040 slat (usec): min=5, max=3282, avg=100.74, stdev=484.26 00:09:35.040 clat (usec): min=1696, max=14596, avg=13220.58, stdev=1095.08 00:09:35.040 lat (usec): min=4427, max=14619, avg=13321.33, stdev=987.87 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[ 8094], 5.00th=[11863], 10.00th=[12649], 20.00th=[12911], 00:09:35.040 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:09:35.040 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:09:35.040 | 99.00th=[14484], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:09:35.040 | 99.99th=[14615] 00:09:35.040 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:35.040 slat (usec): min=8, max=5005, avg=95.90, stdev=421.32 00:09:35.040 clat (usec): min=9446, max=15631, avg=12622.70, stdev=737.56 00:09:35.040 lat (usec): min=10526, max=15651, avg=12718.60, stdev=603.85 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[10028], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:09:35.040 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:09:35.040 | 70.00th=[12780], 80.00th=[13042], 
90.00th=[13304], 95.00th=[13435], 00:09:35.040 | 99.00th=[15401], 99.50th=[15664], 99.90th=[15664], 99.95th=[15664], 00:09:35.040 | 99.99th=[15664] 00:09:35.040 bw ( KiB/s): min=20232, max=20521, per=26.76%, avg=20376.50, stdev=204.35, samples=2 00:09:35.040 iops : min= 5058, max= 5130, avg=5094.00, stdev=50.91, samples=2 00:09:35.040 lat (msec) : 2=0.01%, 10=1.13%, 20=98.86% 00:09:35.040 cpu : usr=3.69%, sys=13.35%, ctx=330, majf=0, minf=7 00:09:35.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:35.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.040 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.040 job2: (groupid=0, jobs=1): err= 0: pid=67325: Wed Dec 11 08:43:42 2024 00:09:35.040 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:35.040 slat (usec): min=8, max=3695, avg=114.40, stdev=547.34 00:09:35.040 clat (usec): min=11020, max=16511, avg=15211.05, stdev=800.62 00:09:35.040 lat (usec): min=13760, max=16531, avg=15325.45, stdev=596.05 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[11994], 5.00th=[13960], 10.00th=[14353], 20.00th=[14746], 00:09:35.040 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:09:35.040 | 70.00th=[15664], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:09:35.040 | 99.00th=[16450], 99.50th=[16450], 99.90th=[16450], 99.95th=[16450], 00:09:35.040 | 99.99th=[16450] 00:09:35.040 write: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1003msec); 0 zone resets 00:09:35.040 slat (usec): min=8, max=6825, avg=111.61, stdev=496.32 00:09:35.040 clat (usec): min=2359, max=18959, avg=14495.22, stdev=1563.79 00:09:35.040 lat (usec): min=2370, max=18984, avg=14606.83, stdev=1488.84 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[ 6128], 5.00th=[12911], 10.00th=[13435], 20.00th=[14091], 00:09:35.040 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:09:35.040 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:09:35.040 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:09:35.040 | 99.99th=[19006] 00:09:35.040 bw ( KiB/s): min=17016, max=17488, per=22.66%, avg=17252.00, stdev=333.75, samples=2 00:09:35.040 iops : min= 4254, max= 4372, avg=4313.00, stdev=83.44, samples=2 00:09:35.040 lat (msec) : 4=0.28%, 10=0.70%, 20=99.02% 00:09:35.040 cpu : usr=4.79%, sys=11.88%, ctx=267, majf=0, minf=9 00:09:35.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:35.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.040 issued rwts: total=4096,4440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.040 job3: (groupid=0, jobs=1): err= 0: pid=67326: Wed Dec 11 08:43:42 2024 00:09:35.040 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:35.040 slat (usec): min=5, max=6978, avg=114.08, stdev=552.59 00:09:35.040 clat (usec): min=10804, max=18396, avg=15131.45, stdev=968.45 00:09:35.040 lat (usec): min=13366, max=18405, avg=15245.54, stdev=809.86 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[11863], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:09:35.040 | 
30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:09:35.040 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16057], 95.00th=[16188], 00:09:35.040 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:35.040 | 99.99th=[18482] 00:09:35.040 write: IOPS=4435, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1003msec); 0 zone resets 00:09:35.040 slat (usec): min=10, max=7573, avg=112.32, stdev=499.75 00:09:35.040 clat (usec): min=137, max=20948, avg=14522.25, stdev=1710.01 00:09:35.040 lat (usec): min=2680, max=20999, avg=14634.57, stdev=1637.68 00:09:35.040 clat percentiles (usec): 00:09:35.040 | 1.00th=[ 6783], 5.00th=[12518], 10.00th=[13435], 20.00th=[13960], 00:09:35.040 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:09:35.040 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15795], 00:09:35.040 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:09:35.040 | 99.99th=[20841] 00:09:35.040 bw ( KiB/s): min=17152, max=17450, per=22.72%, avg=17301.00, stdev=210.72, samples=2 00:09:35.040 iops : min= 4288, max= 4362, avg=4325.00, stdev=52.33, samples=2 00:09:35.040 lat (usec) : 250=0.01% 00:09:35.040 lat (msec) : 4=0.37%, 10=0.37%, 20=98.51%, 50=0.73% 00:09:35.040 cpu : usr=5.59%, sys=11.08%, ctx=270, majf=0, minf=12 00:09:35.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:35.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.040 issued rwts: total=4096,4449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.040 00:09:35.040 Run status group 0 (all jobs): 00:09:35.040 READ: bw=68.7MiB/s (72.0MB/s), 16.0MiB/s-18.6MiB/s (16.7MB/s-19.5MB/s), io=69.0MiB (72.4MB), run=1001-1005msec 00:09:35.040 WRITE: bw=74.4MiB/s (78.0MB/s), 17.3MiB/s-20.0MiB/s (18.1MB/s-20.9MB/s), io=74.7MiB (78.4MB), run=1001-1005msec 00:09:35.040 00:09:35.040 Disk stats (read/write): 00:09:35.040 nvme0n1: ios=4145/4384, merge=0/0, ticks=12334/11812, in_queue=24146, util=88.04% 00:09:35.040 nvme0n2: ios=4096/4320, merge=0/0, ticks=12319/11767, in_queue=24086, util=87.98% 00:09:35.040 nvme0n3: ios=3584/3776, merge=0/0, ticks=12320/12044, in_queue=24364, util=89.36% 00:09:35.040 nvme0n4: ios=3584/3776, merge=0/0, ticks=12038/11893, in_queue=23931, util=88.89% 00:09:35.040 08:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:35.040 [global] 00:09:35.040 thread=1 00:09:35.040 invalidate=1 00:09:35.040 rw=randwrite 00:09:35.040 time_based=1 00:09:35.040 runtime=1 00:09:35.040 ioengine=libaio 00:09:35.040 direct=1 00:09:35.040 bs=4096 00:09:35.040 iodepth=128 00:09:35.040 norandommap=0 00:09:35.040 numjobs=1 00:09:35.040 00:09:35.040 verify_dump=1 00:09:35.040 verify_backlog=512 00:09:35.040 verify_state_save=0 00:09:35.040 do_verify=1 00:09:35.040 verify=crc32c-intel 00:09:35.040 [job0] 00:09:35.040 filename=/dev/nvme0n1 00:09:35.040 [job1] 00:09:35.040 filename=/dev/nvme0n2 00:09:35.040 [job2] 00:09:35.040 filename=/dev/nvme0n3 00:09:35.040 [job3] 00:09:35.040 filename=/dev/nvme0n4 00:09:35.040 Could not set queue depth (nvme0n1) 00:09:35.040 Could not set queue depth (nvme0n2) 00:09:35.040 Could not set queue depth (nvme0n3) 00:09:35.040 Could not set queue depth (nvme0n4) 00:09:35.040 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.040 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.040 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.041 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.041 fio-3.35 00:09:35.041 Starting 4 threads 00:09:36.418 00:09:36.418 job0: (groupid=0, jobs=1): err= 0: pid=67385: Wed Dec 11 08:43:43 2024 00:09:36.418 read: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:36.418 slat (usec): min=8, max=5947, avg=85.53, stdev=511.04 00:09:36.418 clat (usec): min=1558, max=18607, avg=11796.24, stdev=1381.95 00:09:36.418 lat (usec): min=4280, max=22085, avg=11881.77, stdev=1397.77 00:09:36.418 clat percentiles (usec): 00:09:36.419 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11338], 00:09:36.419 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:09:36.419 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[12780], 00:09:36.419 | 99.00th=[17957], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:36.419 | 99.99th=[18482] 00:09:36.419 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:36.419 slat (usec): min=5, max=8465, avg=84.43, stdev=495.42 00:09:36.419 clat (usec): min=5902, max=15266, avg=10776.92, stdev=1047.67 00:09:36.419 lat (usec): min=6505, max=15281, avg=10861.34, stdev=953.55 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[ 7046], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:09:36.419 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:09:36.419 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:09:36.419 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15270], 99.95th=[15270], 00:09:36.419 | 99.99th=[15270] 00:09:36.419 bw ( KiB/s): min=21000, max=24056, per=34.71%, avg=22528.00, stdev=2160.92, samples=2 00:09:36.419 iops : min= 5250, max= 6014, avg=5632.00, stdev=540.23, samples=2 00:09:36.419 lat (msec) : 2=0.01%, 10=9.59%, 20=90.41% 00:09:36.419 cpu : usr=5.39%, sys=14.19%, ctx=244, majf=0, minf=9 00:09:36.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.419 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.419 job1: (groupid=0, jobs=1): err= 0: pid=67386: Wed Dec 11 08:43:43 2024 00:09:36.419 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:36.419 slat (usec): min=9, max=15615, avg=207.57, stdev=1226.82 00:09:36.419 clat (usec): min=17313, max=42551, avg=27638.25, stdev=5631.75 00:09:36.419 lat (usec): min=17338, max=42585, avg=27845.82, stdev=5662.11 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[18744], 5.00th=[21627], 10.00th=[23725], 20.00th=[24249], 00:09:36.419 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:09:36.419 | 70.00th=[26084], 80.00th=[35390], 90.00th=[38011], 95.00th=[38536], 00:09:36.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:36.419 | 99.99th=[42730] 00:09:36.419 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1005msec); 0 zone 
resets 00:09:36.419 slat (usec): min=7, max=11251, avg=151.91, stdev=980.59 00:09:36.419 clat (usec): min=1708, max=38682, avg=19292.22, stdev=4891.00 00:09:36.419 lat (usec): min=6769, max=38704, avg=19444.13, stdev=4835.45 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[ 7308], 5.00th=[11994], 10.00th=[13173], 20.00th=[14091], 00:09:36.419 | 30.00th=[15401], 40.00th=[17171], 50.00th=[20841], 60.00th=[22676], 00:09:36.419 | 70.00th=[23200], 80.00th=[23462], 90.00th=[24511], 95.00th=[25035], 00:09:36.419 | 99.00th=[27657], 99.50th=[27919], 99.90th=[36963], 99.95th=[38011], 00:09:36.419 | 99.99th=[38536] 00:09:36.419 bw ( KiB/s): min=10136, max=12312, per=17.29%, avg=11224.00, stdev=1538.66, samples=2 00:09:36.419 iops : min= 2534, max= 3078, avg=2806.00, stdev=384.67, samples=2 00:09:36.419 lat (msec) : 2=0.02%, 10=1.31%, 20=25.22%, 50=73.45% 00:09:36.419 cpu : usr=2.89%, sys=8.17%, ctx=161, majf=0, minf=9 00:09:36.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.419 issued rwts: total=2560,2931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.419 job2: (groupid=0, jobs=1): err= 0: pid=67387: Wed Dec 11 08:43:43 2024 00:09:36.419 read: IOPS=4779, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec) 00:09:36.419 slat (usec): min=5, max=6427, avg=96.41, stdev=604.60 00:09:36.419 clat (usec): min=1349, max=21575, avg=13425.86, stdev=1657.04 00:09:36.419 lat (usec): min=6154, max=25652, avg=13522.27, stdev=1676.41 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[ 7177], 5.00th=[10159], 10.00th=[12649], 20.00th=[12911], 00:09:36.419 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:09:36.419 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:09:36.419 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:09:36.419 | 99.99th=[21627] 00:09:36.419 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:36.419 slat (usec): min=11, max=8926, avg=97.59, stdev=574.57 00:09:36.419 clat (usec): min=6595, max=17001, avg=12285.63, stdev=1111.20 00:09:36.419 lat (usec): min=8757, max=17026, avg=12383.23, stdev=985.77 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[ 8160], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:09:36.419 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:36.419 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:09:36.419 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:09:36.419 | 99.99th=[16909] 00:09:36.419 bw ( KiB/s): min=20480, max=20521, per=31.59%, avg=20500.50, stdev=28.99, samples=2 00:09:36.419 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:36.419 lat (msec) : 2=0.01%, 10=3.74%, 20=95.61%, 50=0.64% 00:09:36.419 cpu : usr=4.48%, sys=13.84%, ctx=211, majf=0, minf=7 00:09:36.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.419 issued rwts: total=4803,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.419 job3: 
(groupid=0, jobs=1): err= 0: pid=67388: Wed Dec 11 08:43:43 2024 00:09:36.419 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:09:36.419 slat (usec): min=6, max=12924, avg=182.39, stdev=876.14 00:09:36.419 clat (usec): min=18518, max=47113, avg=26271.18, stdev=3845.81 00:09:36.419 lat (usec): min=18552, max=47128, avg=26453.57, stdev=3845.04 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[20055], 5.00th=[22414], 10.00th=[23462], 20.00th=[23987], 00:09:36.419 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:09:36.419 | 70.00th=[25822], 80.00th=[28181], 90.00th=[31589], 95.00th=[34341], 00:09:36.419 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:09:36.419 | 99.99th=[46924] 00:09:36.419 write: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1007msec); 0 zone resets 00:09:36.419 slat (usec): min=13, max=12417, avg=192.25, stdev=1177.17 00:09:36.419 clat (usec): min=5766, max=37014, avg=22191.52, stdev=3765.97 00:09:36.419 lat (usec): min=7261, max=37067, avg=22383.77, stdev=3876.46 00:09:36.419 clat percentiles (usec): 00:09:36.419 | 1.00th=[13173], 5.00th=[15533], 10.00th=[17171], 20.00th=[18744], 00:09:36.419 | 30.00th=[21103], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:09:36.419 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26608], 95.00th=[28443], 00:09:36.419 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[35390], 00:09:36.419 | 99.99th=[36963] 00:09:36.419 bw ( KiB/s): min= 8248, max=12232, per=15.78%, avg=10240.00, stdev=2817.11, samples=2 00:09:36.419 iops : min= 2062, max= 3058, avg=2560.00, stdev=704.28, samples=2 00:09:36.419 lat (msec) : 10=0.17%, 20=14.06%, 50=85.77% 00:09:36.419 cpu : usr=2.39%, sys=8.05%, ctx=167, majf=0, minf=17 00:09:36.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:36.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.419 issued rwts: total=2560,2655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.419 00:09:36.419 Run status group 0 (all jobs): 00:09:36.419 READ: bw=60.3MiB/s (63.2MB/s), 9.93MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=60.7MiB (63.7MB), run=1002-1007msec 00:09:36.419 WRITE: bw=63.4MiB/s (66.5MB/s), 10.3MiB/s-22.0MiB/s (10.8MB/s-23.0MB/s), io=63.8MiB (66.9MB), run=1002-1007msec 00:09:36.419 00:09:36.419 Disk stats (read/write): 00:09:36.419 nvme0n1: ios=4658/5007, merge=0/0, ticks=51371/49836, in_queue=101207, util=88.08% 00:09:36.419 nvme0n2: ios=2097/2548, merge=0/0, ticks=54500/45446, in_queue=99946, util=88.26% 00:09:36.419 nvme0n3: ios=4096/4352, merge=0/0, ticks=52054/49143, in_queue=101197, util=89.18% 00:09:36.419 nvme0n4: ios=2048/2412, merge=0/0, ticks=25125/24821, in_queue=49946, util=88.80% 00:09:36.419 08:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:36.419 08:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67401 00:09:36.419 08:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:36.419 08:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:36.419 [global] 00:09:36.419 thread=1 00:09:36.419 invalidate=1 00:09:36.419 rw=read 00:09:36.419 time_based=1 00:09:36.419 runtime=10 00:09:36.419 ioengine=libaio 
00:09:36.419 direct=1 00:09:36.419 bs=4096 00:09:36.419 iodepth=1 00:09:36.419 norandommap=1 00:09:36.419 numjobs=1 00:09:36.419 00:09:36.419 [job0] 00:09:36.419 filename=/dev/nvme0n1 00:09:36.419 [job1] 00:09:36.419 filename=/dev/nvme0n2 00:09:36.419 [job2] 00:09:36.419 filename=/dev/nvme0n3 00:09:36.419 [job3] 00:09:36.419 filename=/dev/nvme0n4 00:09:36.419 Could not set queue depth (nvme0n1) 00:09:36.419 Could not set queue depth (nvme0n2) 00:09:36.419 Could not set queue depth (nvme0n3) 00:09:36.419 Could not set queue depth (nvme0n4) 00:09:36.419 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.419 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.419 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.419 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.419 fio-3.35 00:09:36.419 Starting 4 threads 00:09:39.700 08:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:39.700 fio: pid=67448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.700 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33304576, buflen=4096 00:09:39.700 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:39.957 fio: pid=67447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.958 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=71467008, buflen=4096 00:09:39.958 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.958 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:40.215 fio: pid=67443, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.215 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40419328, buflen=4096 00:09:40.215 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.215 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:40.474 fio: pid=67446, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.474 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48439296, buflen=4096 00:09:40.474 00:09:40.474 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67443: Wed Dec 11 08:43:48 2024 00:09:40.474 read: IOPS=2706, BW=10.6MiB/s (11.1MB/s)(38.5MiB/3647msec) 00:09:40.474 slat (usec): min=12, max=13606, avg=26.37, stdev=221.87 00:09:40.474 clat (usec): min=125, max=3074, avg=340.71, stdev=113.12 00:09:40.474 lat (usec): min=139, max=13873, avg=367.07, stdev=250.37 00:09:40.474 clat percentiles (usec): 00:09:40.474 | 1.00th=[ 147], 5.00th=[ 188], 10.00th=[ 225], 20.00th=[ 255], 00:09:40.474 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:09:40.474 | 70.00th=[ 355], 80.00th=[ 371], 
90.00th=[ 482], 95.00th=[ 519], 00:09:40.474 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 1369], 99.95th=[ 2507], 00:09:40.474 | 99.99th=[ 3064] 00:09:40.474 bw ( KiB/s): min= 8080, max=14779, per=22.12%, avg=10656.43, stdev=2098.99, samples=7 00:09:40.474 iops : min= 2020, max= 3694, avg=2664.00, stdev=524.50, samples=7 00:09:40.474 lat (usec) : 250=18.17%, 500=74.77%, 750=6.75%, 1000=0.11% 00:09:40.474 lat (msec) : 2=0.14%, 4=0.05% 00:09:40.474 cpu : usr=1.15%, sys=5.51%, ctx=9873, majf=0, minf=1 00:09:40.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 issued rwts: total=9869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.474 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67446: Wed Dec 11 08:43:48 2024 00:09:40.474 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(46.2MiB/3926msec) 00:09:40.474 slat (usec): min=7, max=11777, avg=20.62, stdev=201.20 00:09:40.474 clat (usec): min=123, max=73668, avg=309.74, stdev=681.34 00:09:40.474 lat (usec): min=135, max=73694, avg=330.36, stdev=710.47 00:09:40.474 clat percentiles (usec): 00:09:40.474 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 159], 20.00th=[ 231], 00:09:40.474 | 30.00th=[ 277], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 343], 00:09:40.474 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:09:40.474 | 99.00th=[ 424], 99.50th=[ 553], 99.90th=[ 1369], 99.95th=[ 1811], 00:09:40.474 | 99.99th=[ 3064] 00:09:40.474 bw ( KiB/s): min=10872, max=13311, per=23.57%, avg=11351.86, stdev=879.85, samples=7 00:09:40.474 iops : min= 2718, max= 3327, avg=2837.86, stdev=219.68, samples=7 00:09:40.474 lat (usec) : 250=25.31%, 500=74.04%, 750=0.37%, 1000=0.08% 00:09:40.474 lat (msec) : 2=0.16%, 4=0.02%, 100=0.01% 00:09:40.474 cpu : usr=1.25%, sys=4.28%, ctx=11850, majf=0, minf=2 00:09:40.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 issued rwts: total=11827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.474 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67447: Wed Dec 11 08:43:48 2024 00:09:40.474 read: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(68.2MiB/3299msec) 00:09:40.474 slat (usec): min=11, max=9513, avg=14.84, stdev=98.18 00:09:40.474 clat (usec): min=81, max=3205, avg=173.09, stdev=30.76 00:09:40.474 lat (usec): min=151, max=9693, avg=187.92, stdev=103.04 00:09:40.474 clat percentiles (usec): 00:09:40.474 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:09:40.474 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:40.474 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:09:40.474 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 269], 99.95th=[ 502], 00:09:40.474 | 99.99th=[ 1237] 00:09:40.474 bw ( KiB/s): min=20336, max=21616, per=44.19%, avg=21282.67, stdev=487.06, samples=6 00:09:40.474 iops : min= 5084, max= 5404, avg=5320.67, stdev=121.76, samples=6 00:09:40.474 lat (usec) : 100=0.01%, 250=99.82%, 500=0.11%, 750=0.03% 
00:09:40.474 lat (msec) : 2=0.01%, 4=0.01% 00:09:40.474 cpu : usr=1.30%, sys=6.43%, ctx=17455, majf=0, minf=2 00:09:40.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.474 issued rwts: total=17449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.474 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67448: Wed Dec 11 08:43:48 2024 00:09:40.474 read: IOPS=2748, BW=10.7MiB/s (11.3MB/s)(31.8MiB/2959msec) 00:09:40.474 slat (usec): min=8, max=282, avg=15.30, stdev= 6.05 00:09:40.474 clat (usec): min=226, max=2057, avg=347.19, stdev=42.06 00:09:40.474 lat (usec): min=239, max=2072, avg=362.49, stdev=41.97 00:09:40.475 clat percentiles (usec): 00:09:40.475 | 1.00th=[ 281], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 326], 00:09:40.475 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:09:40.475 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:09:40.475 | 99.00th=[ 412], 99.50th=[ 465], 99.90th=[ 676], 99.95th=[ 996], 00:09:40.475 | 99.99th=[ 2057] 00:09:40.475 bw ( KiB/s): min=10880, max=11312, per=22.92%, avg=11041.60, stdev=197.86, samples=5 00:09:40.475 iops : min= 2720, max= 2828, avg=2760.40, stdev=49.47, samples=5 00:09:40.475 lat (usec) : 250=0.15%, 500=99.46%, 750=0.31%, 1000=0.02% 00:09:40.475 lat (msec) : 2=0.04%, 4=0.01% 00:09:40.475 cpu : usr=0.88%, sys=3.79%, ctx=8146, majf=0, minf=2 00:09:40.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.475 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.475 issued rwts: total=8132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.475 00:09:40.475 Run status group 0 (all jobs): 00:09:40.475 READ: bw=47.0MiB/s (49.3MB/s), 10.6MiB/s-20.7MiB/s (11.1MB/s-21.7MB/s), io=185MiB (194MB), run=2959-3926msec 00:09:40.475 00:09:40.475 Disk stats (read/write): 00:09:40.475 nvme0n1: ios=9741/0, merge=0/0, ticks=3400/0, in_queue=3400, util=95.57% 00:09:40.475 nvme0n2: ios=11484/0, merge=0/0, ticks=3616/0, in_queue=3616, util=95.83% 00:09:40.475 nvme0n3: ios=16469/0, merge=0/0, ticks=2964/0, in_queue=2964, util=96.30% 00:09:40.475 nvme0n4: ios=7888/0, merge=0/0, ticks=2687/0, in_queue=2687, util=96.79% 00:09:40.475 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.475 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:40.733 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.733 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:40.991 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.991 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:41.249 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.249 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:41.815 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.815 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67401 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.074 nvmf hotplug test: fio failed as expected 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:42.074 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.333 rmmod nvme_tcp 00:09:42.333 rmmod nvme_fabrics 00:09:42.333 rmmod nvme_keyring 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 67014 ']' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 67014 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 67014 ']' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 67014 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67014 00:09:42.333 killing process with pid 67014 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67014' 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 67014 00:09:42.333 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 67014 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:42.600 08:43:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.600 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:42.872 00:09:42.872 real 0m20.207s 00:09:42.872 user 1m15.831s 00:09:42.872 sys 0m10.437s 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.872 ************************************ 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.872 END TEST nvmf_fio_target 00:09:42.872 ************************************ 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.872 ************************************ 00:09:42.872 START TEST nvmf_bdevio 00:09:42.872 ************************************ 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:42.872 * Looking for test storage... 
00:09:42.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.872 --rc genhtml_branch_coverage=1 00:09:42.872 --rc genhtml_function_coverage=1 00:09:42.872 --rc genhtml_legend=1 00:09:42.872 --rc geninfo_all_blocks=1 00:09:42.872 --rc geninfo_unexecuted_blocks=1 00:09:42.872 00:09:42.872 ' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.872 --rc genhtml_branch_coverage=1 00:09:42.872 --rc genhtml_function_coverage=1 00:09:42.872 --rc genhtml_legend=1 00:09:42.872 --rc geninfo_all_blocks=1 00:09:42.872 --rc geninfo_unexecuted_blocks=1 00:09:42.872 00:09:42.872 ' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.872 --rc genhtml_branch_coverage=1 00:09:42.872 --rc genhtml_function_coverage=1 00:09:42.872 --rc genhtml_legend=1 00:09:42.872 --rc geninfo_all_blocks=1 00:09:42.872 --rc geninfo_unexecuted_blocks=1 00:09:42.872 00:09:42.872 ' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.872 --rc genhtml_branch_coverage=1 00:09:42.872 --rc genhtml_function_coverage=1 00:09:42.872 --rc genhtml_legend=1 00:09:42.872 --rc geninfo_all_blocks=1 00:09:42.872 --rc geninfo_unexecuted_blocks=1 00:09:42.872 00:09:42.872 ' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.872 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.873 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
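The nvmftestinit call traced above drives the nvmf_veth_init helper whose individual ip commands make up the trace that follows. As a reading aid, the topology it builds can be condensed into the sketch below; the namespace, interface names and addresses are taken from the trace itself, while the grouping and comments are editorial, and the second initiator/target interface pair plus the iptables rules are omitted for brevity (this is a sketch, not the SPDK helper itself):

    # condensed sketch of the veth/namespace topology built by nvmf_veth_init (editorial, not the SPDK helper)
    ip netns add nvmf_tgt_ns_spdk                               # target runs inside its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address (NVMF_FIRST_INITIATOR_IP)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address (NVMF_FIRST_TARGET_IP)
    ip link add nvmf_br type bridge                             # bridge that joins the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up

With that picture in mind, the 10.0.0.3 and 10.0.0.4 addresses pinged further down are the target-side interfaces inside nvmf_tgt_ns_spdk, while 10.0.0.1 and 10.0.0.2 are the host-side initiator interfaces.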
00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:43.132 Cannot find device "nvmf_init_br" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.132 Cannot find device "nvmf_init_br2" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.132 Cannot find device "nvmf_tgt_br" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.132 Cannot find device "nvmf_tgt_br2" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.132 Cannot find device "nvmf_init_br" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.132 Cannot find device "nvmf_init_br2" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.132 Cannot find device "nvmf_tgt_br" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:43.132 Cannot find device "nvmf_tgt_br2" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:43.132 Cannot find device "nvmf_br" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:43.132 Cannot find device "nvmf_init_if" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:43.132 Cannot find device "nvmf_init_if2" 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.132 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.133 
08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.133 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:43.392 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.392 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:09:43.392 00:09:43.392 --- 10.0.0.3 ping statistics --- 00:09:43.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.392 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:43.392 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:43.392 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:43.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:43.392 00:09:43.392 --- 10.0.0.4 ping statistics --- 00:09:43.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.392 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:43.392 00:09:43.392 --- 10.0.0.1 ping statistics --- 00:09:43.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.392 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:43.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:43.392 00:09:43.392 --- 10.0.0.2 ping statistics --- 00:09:43.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.392 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:43.392 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67773 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67773 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67773 ']' 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.393 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.393 [2024-12-11 08:43:51.101729] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
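For orientation, the veth/namespace plumbing that nvmf_veth_init traces above reduces to the sketch below. It keeps only one interface pair per side (the *_if2/*_br2 pair follows the same pattern), drops the per-command guards, and uses the names and addresses shown in the log.

# The target runs in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 (initiator, root namespace) talks to 10.0.0.3 (target, inside the namespace).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up and join the two bridge-side peers to one bridge.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP on port 4420, let the bridge forward, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3

The pings recorded in the trace (10.0.0.3, 10.0.0.4 from the root namespace; 10.0.0.1, 10.0.0.2 from inside the namespace) are exactly this reachability check run in both directions.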
00:09:43.393 [2024-12-11 08:43:51.101848] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.652 [2024-12-11 08:43:51.254824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.652 [2024-12-11 08:43:51.295590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.652 [2024-12-11 08:43:51.295661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.652 [2024-12-11 08:43:51.295684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.652 [2024-12-11 08:43:51.295694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.652 [2024-12-11 08:43:51.295703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.652 [2024-12-11 08:43:51.296905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.652 [2024-12-11 08:43:51.297064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.652 [2024-12-11 08:43:51.297288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.652 [2024-12-11 08:43:51.297289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.652 [2024-12-11 08:43:51.332278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.652 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.652 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:43.652 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.652 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.652 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 [2024-12-11 08:43:51.429042] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 Malloc0 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 [2024-12-11 08:43:51.501210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.911 { 00:09:43.911 "params": { 00:09:43.911 "name": "Nvme$subsystem", 00:09:43.911 "trtype": "$TEST_TRANSPORT", 00:09:43.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.911 "adrfam": "ipv4", 00:09:43.911 "trsvcid": "$NVMF_PORT", 00:09:43.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.911 "hdgst": ${hdgst:-false}, 00:09:43.911 "ddgst": ${ddgst:-false} 00:09:43.911 }, 00:09:43.911 "method": "bdev_nvme_attach_controller" 00:09:43.911 } 00:09:43.911 EOF 00:09:43.911 )") 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
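The target-side bring-up performed above through rpc_cmd is a short, fixed RPC sequence. A minimal equivalent using scripts/rpc.py directly (rpc_cmd is the test's wrapper around it; paths are relative to the SPDK repo checkout):

# nvmfappstart: run the target inside the namespace; -m 0x78 selects cores 3-6,
# which matches the four reactors reported in the startup notices.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &

# Once /var/tmp/spdk.sock is listening (waitforlisten in the test):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" device that bdevio later reports is this Malloc0 namespace seen over NVMe/TCP.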
00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:43.911 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.911 "params": { 00:09:43.911 "name": "Nvme1", 00:09:43.911 "trtype": "tcp", 00:09:43.911 "traddr": "10.0.0.3", 00:09:43.911 "adrfam": "ipv4", 00:09:43.911 "trsvcid": "4420", 00:09:43.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.911 "hdgst": false, 00:09:43.911 "ddgst": false 00:09:43.911 }, 00:09:43.911 "method": "bdev_nvme_attach_controller" 00:09:43.911 }' 00:09:43.911 [2024-12-11 08:43:51.563682] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:09:43.911 [2024-12-11 08:43:51.563803] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67796 ] 00:09:44.171 [2024-12-11 08:43:51.712798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.171 [2024-12-11 08:43:51.746754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.171 [2024-12-11 08:43:51.747300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.171 [2024-12-11 08:43:51.747308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.171 [2024-12-11 08:43:51.785383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.171 I/O targets: 00:09:44.171 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:44.171 00:09:44.171 00:09:44.171 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.171 http://cunit.sourceforge.net/ 00:09:44.171 00:09:44.171 00:09:44.171 Suite: bdevio tests on: Nvme1n1 00:09:44.171 Test: blockdev write read block ...passed 00:09:44.171 Test: blockdev write zeroes read block ...passed 00:09:44.171 Test: blockdev write zeroes read no split ...passed 00:09:44.171 Test: blockdev write zeroes read split ...passed 00:09:44.171 Test: blockdev write zeroes read split partial ...passed 00:09:44.171 Test: blockdev reset ...[2024-12-11 08:43:51.921424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:44.171 [2024-12-11 08:43:51.921543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ceb30 (9): Bad file descriptor 00:09:44.171 [2024-12-11 08:43:51.937633] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
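The JSON printed just above is the only configuration bdevio receives; the test streams it in through /dev/fd/62. To reproduce the run outside the harness, the same attach-controller call can be wrapped in SPDK's standard --json config shape and pointed at a file. The wrapper layout and the /tmp path below are illustrative assumptions; the params block is copied from the trace.

cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json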
00:09:44.171 passed 00:09:44.171 Test: blockdev write read 8 blocks ...passed 00:09:44.171 Test: blockdev write read size > 128k ...passed 00:09:44.171 Test: blockdev write read invalid size ...passed 00:09:44.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.171 Test: blockdev write read max offset ...passed 00:09:44.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.171 Test: blockdev writev readv 8 blocks ...passed 00:09:44.171 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.171 Test: blockdev writev readv block ...passed 00:09:44.430 Test: blockdev writev readv size > 128k ...passed 00:09:44.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.430 Test: blockdev comparev and writev ...[2024-12-11 08:43:51.946180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.946376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.946631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.947091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.947294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.947394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.947503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.947976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.948080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.948209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.948321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.948781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.948894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.948991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.430 [2024-12-11 08:43:51.949059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:44.430 passed 00:09:44.430 Test: blockdev nvme passthru rw ...passed 00:09:44.430 Test: blockdev nvme passthru vendor specific ...[2024-12-11 08:43:51.949977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.430 [2024-12-11 08:43:51.950116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.950353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.430 [2024-12-11 08:43:51.950468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.950661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.430 [2024-12-11 08:43:51.950773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:44.430 [2024-12-11 08:43:51.950958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.430 [2024-12-11 08:43:51.951055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:44.430 passed 00:09:44.430 Test: blockdev nvme admin passthru ...passed 00:09:44.430 Test: blockdev copy ...passed 00:09:44.430 00:09:44.430 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.430 suites 1 1 n/a 0 0 00:09:44.430 tests 23 23 23 0 0 00:09:44.430 asserts 152 152 152 0 n/a 00:09:44.430 00:09:44.430 Elapsed time = 0.148 seconds 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.430 rmmod nvme_tcp 00:09:44.430 rmmod nvme_fabrics 00:09:44.430 rmmod nvme_keyring 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67773 ']' 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67773 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67773 ']' 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67773 00:09:44.430 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67773 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:44.690 killing process with pid 67773 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67773' 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67773 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67773 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.690 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.949 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:44.950 00:09:44.950 real 0m2.196s 00:09:44.950 user 0m5.475s 00:09:44.950 sys 0m0.789s 00:09:44.950 ************************************ 00:09:44.950 END TEST nvmf_bdevio 00:09:44.950 ************************************ 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.950 ************************************ 00:09:44.950 END TEST nvmf_target_core 00:09:44.950 ************************************ 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.950 00:09:44.950 real 2m30.383s 00:09:44.950 user 6m33.753s 00:09:44.950 sys 0m52.030s 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.950 08:43:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.950 08:43:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.950 08:43:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.950 08:43:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.950 08:43:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.209 ************************************ 00:09:45.209 START TEST nvmf_target_extra 00:09:45.209 ************************************ 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.209 * Looking for test storage... 
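The teardown traced above (nvmftestfini, then nvmf_veth_fini) mirrors the setup. Stripped of the per-command guards it amounts to the following sketch; the final netns removal is what _remove_spdk_ns hides behind xtrace_disable_per_cmd, written out here as a plain command for illustration.

kill "$nvmfpid" && wait "$nvmfpid"        # killprocess: stop the nvmf_tgt started earlier
modprobe -r nvme-tcp nvme-fabrics         # unload the host-side modules pulled in at setup

# Drop only the firewall rules the test added: they all carry the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unbridge and delete the links, then the bridge and the namespace.
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk          # assumed equivalent of _remove_spdk_ns

Because every rule carries the SPDK_NVMF comment, the save/grep/restore pass removes the test's rules without disturbing anything else on the box.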
00:09:45.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.209 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.210 --rc genhtml_branch_coverage=1 00:09:45.210 --rc genhtml_function_coverage=1 00:09:45.210 --rc genhtml_legend=1 00:09:45.210 --rc geninfo_all_blocks=1 00:09:45.210 --rc geninfo_unexecuted_blocks=1 00:09:45.210 00:09:45.210 ' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.210 --rc genhtml_branch_coverage=1 00:09:45.210 --rc genhtml_function_coverage=1 00:09:45.210 --rc genhtml_legend=1 00:09:45.210 --rc geninfo_all_blocks=1 00:09:45.210 --rc geninfo_unexecuted_blocks=1 00:09:45.210 00:09:45.210 ' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.210 --rc genhtml_branch_coverage=1 00:09:45.210 --rc genhtml_function_coverage=1 00:09:45.210 --rc genhtml_legend=1 00:09:45.210 --rc geninfo_all_blocks=1 00:09:45.210 --rc geninfo_unexecuted_blocks=1 00:09:45.210 00:09:45.210 ' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.210 --rc genhtml_branch_coverage=1 00:09:45.210 --rc genhtml_function_coverage=1 00:09:45.210 --rc genhtml_legend=1 00:09:45.210 --rc geninfo_all_blocks=1 00:09:45.210 --rc geninfo_unexecuted_blocks=1 00:09:45.210 00:09:45.210 ' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.210 08:43:52 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.210 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.210 ************************************ 00:09:45.210 START TEST nvmf_auth_target 00:09:45.210 ************************************ 00:09:45.210 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:45.470 * Looking for test storage... 
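Each suite that sources nvmf/common.sh derives the host identity the same way: nvme gen-hostnqn emits a uuid-based NQN and the uuid itself becomes the host ID (the 19057b12-... value in the trace). However common.sh extracts it internally, the effect is the two-liner below, plus the NVME_HOST argument array the tests reuse.

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")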
00:09:45.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.470 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.471 --rc genhtml_branch_coverage=1 00:09:45.471 --rc genhtml_function_coverage=1 00:09:45.471 --rc genhtml_legend=1 00:09:45.471 --rc geninfo_all_blocks=1 00:09:45.471 --rc geninfo_unexecuted_blocks=1 00:09:45.471 00:09:45.471 ' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.471 --rc genhtml_branch_coverage=1 00:09:45.471 --rc genhtml_function_coverage=1 00:09:45.471 --rc genhtml_legend=1 00:09:45.471 --rc geninfo_all_blocks=1 00:09:45.471 --rc geninfo_unexecuted_blocks=1 00:09:45.471 00:09:45.471 ' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.471 --rc genhtml_branch_coverage=1 00:09:45.471 --rc genhtml_function_coverage=1 00:09:45.471 --rc genhtml_legend=1 00:09:45.471 --rc geninfo_all_blocks=1 00:09:45.471 --rc geninfo_unexecuted_blocks=1 00:09:45.471 00:09:45.471 ' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.471 --rc genhtml_branch_coverage=1 00:09:45.471 --rc genhtml_function_coverage=1 00:09:45.471 --rc genhtml_legend=1 00:09:45.471 --rc geninfo_all_blocks=1 00:09:45.471 --rc geninfo_unexecuted_blocks=1 00:09:45.471 00:09:45.471 ' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.471 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.471 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.472 
08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.472 Cannot find device "nvmf_init_br" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.472 Cannot find device "nvmf_init_br2" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.472 Cannot find device "nvmf_tgt_br" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.472 Cannot find device "nvmf_tgt_br2" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.472 Cannot find device "nvmf_init_br" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.472 Cannot find device "nvmf_init_br2" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.472 Cannot find device "nvmf_tgt_br" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.472 Cannot find device "nvmf_tgt_br2" 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:45.472 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.730 Cannot find device "nvmf_br" 00:09:45.730 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:45.730 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.730 Cannot find device "nvmf_init_if" 00:09:45.730 08:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.731 Cannot find device "nvmf_init_if2" 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.731 08:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.731 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:45.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:45.990 00:09:45.990 --- 10.0.0.3 ping statistics --- 00:09:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.990 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:45.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:45.990 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:45.990 00:09:45.990 --- 10.0.0.4 ping statistics --- 00:09:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.990 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:09:45.990 00:09:45.990 --- 10.0.0.1 ping statistics --- 00:09:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.990 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:45.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:09:45.990 00:09:45.990 --- 10.0.0.2 ping statistics --- 00:09:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.990 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=68079 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 68079 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68079 ']' 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
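At this point the test has rebuilt its two-namespace veth topology: initiator-side interfaces nvmf_init_if/nvmf_init_if2 on 10.0.0.1-2/24, target-side interfaces nvmf_tgt_if/nvmf_tgt_if2 moved into the nvmf_tgt_ns_spdk namespace on 10.0.0.3-4/24, the bridge-facing peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions before loading nvme-tcp and starting the target. The lines below are a condensed sketch of that setup, assuming root privileges and using only the interface names and addresses visible in the trace; it approximates what nvmf_veth_init in nvmf/common.sh does rather than reproducing it line for line.

# Sketch: rebuild the two-namespace veth topology used by this test.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is set up the same way)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                   # initiator side -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator side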
00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.990 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=68109 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c1cf8fcea84ce319ce91b24c33fcfde7256fec2e80c84556 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R5K 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c1cf8fcea84ce319ce91b24c33fcfde7256fec2e80c84556 0 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c1cf8fcea84ce319ce91b24c33fcfde7256fec2e80c84556 0 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c1cf8fcea84ce319ce91b24c33fcfde7256fec2e80c84556 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:46.249 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.509 08:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R5K 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R5K 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.R5K 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fe897909dbfbcab87c1d7fbafde0a573a7616ce9aa2346bee6739bc4db71662 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gOC 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fe897909dbfbcab87c1d7fbafde0a573a7616ce9aa2346bee6739bc4db71662 3 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fe897909dbfbcab87c1d7fbafde0a573a7616ce9aa2346bee6739bc4db71662 3 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fe897909dbfbcab87c1d7fbafde0a573a7616ce9aa2346bee6739bc4db71662 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gOC 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gOC 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.gOC 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:46.509 08:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d63a5b7bf439bf7887f05f9d581fbd90 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.160 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d63a5b7bf439bf7887f05f9d581fbd90 1 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d63a5b7bf439bf7887f05f9d581fbd90 1 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d63a5b7bf439bf7887f05f9d581fbd90 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.160 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.160 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.160 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0509e984cb969c9afa619ba4f3e08dcf7e79b7140d2cf60d 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LaH 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0509e984cb969c9afa619ba4f3e08dcf7e79b7140d2cf60d 2 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0509e984cb969c9afa619ba4f3e08dcf7e79b7140d2cf60d 2 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.509 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0509e984cb969c9afa619ba4f3e08dcf7e79b7140d2cf60d 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LaH 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LaH 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.LaH 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=640ca17edff2bf3aa9fb99e287087655adb14af0400575b1 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O7B 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 640ca17edff2bf3aa9fb99e287087655adb14af0400575b1 2 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 640ca17edff2bf3aa9fb99e287087655adb14af0400575b1 2 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=640ca17edff2bf3aa9fb99e287087655adb14af0400575b1 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:46.510 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O7B 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O7B 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.O7B 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.769 08:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0458deac26ce4e0e82260c5694737687 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.O2g 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0458deac26ce4e0e82260c5694737687 1 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0458deac26ce4e0e82260c5694737687 1 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0458deac26ce4e0e82260c5694737687 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:46.769 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.O2g 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.O2g 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.O2g 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=620d212abbf59f96ce6101ce0f13edc10c31370dfc18c80072c4ca995e8ac910 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ApX 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
620d212abbf59f96ce6101ce0f13edc10c31370dfc18c80072c4ca995e8ac910 3 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 620d212abbf59f96ce6101ce0f13edc10c31370dfc18c80072c4ca995e8ac910 3 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=620d212abbf59f96ce6101ce0f13edc10c31370dfc18c80072c4ca995e8ac910 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ApX 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ApX 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ApX 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 68079 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68079 ']' 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.770 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 68109 /var/tmp/host.sock 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68109 ']' 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
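The keys[0..3] and ckeys[0..2] files above were produced by gen_dhchap_key: read len/2 random bytes with xxd -p from /dev/urandom, wrap the resulting hex string in a DHHC-1:<digest>: secret, and store it in a mktemp'd /tmp/spdk.key-<digest>.XXX file with mode 0600. Two daemons now hold those key files: the in-namespace nvmf_tgt (pid 68079, RPC socket /var/tmp/spdk.sock) and the host-side spdk_tgt (pid 68109, /var/tmp/host.sock). A minimal sketch of an equivalent key generation follows; the payload layout (base64 of the ASCII hex key followed by its CRC-32) is inferred from the DHHC-1 secrets printed later in this log, so the exact CRC packing and endianness should be treated as an assumption rather than a spec citation.

# Sketch: generate one null-digest (00) DH-HMAC-CHAP secret the way the trace does.
key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex characters
secret=$(python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # the ASCII hex string is the secret material
crc = struct.pack("<I", zlib.crc32(key))    # assumption: little-endian CRC-32 appended
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"
echo "$keyfile"                             # path handed to keyring_file_add_key later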
00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.029 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R5K 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.R5K 00:09:47.595 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.R5K 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.gOC ]] 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gOC 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gOC 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gOC 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.160 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.853 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.113 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.113 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.160 00:09:48.113 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.160 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.LaH ]] 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LaH 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LaH 00:09:48.373 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LaH 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O7B 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.O7B 00:09:48.632 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.O7B 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.O2g ]] 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O2g 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O2g 00:09:48.892 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O2g 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ApX 00:09:49.151 08:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ApX 00:09:49.151 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ApX 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.410 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.670 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.929 00:09:49.929 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.929 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.929 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.188 { 00:09:50.188 "cntlid": 1, 00:09:50.188 "qid": 0, 00:09:50.188 "state": "enabled", 00:09:50.188 "thread": "nvmf_tgt_poll_group_000", 00:09:50.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:09:50.188 "listen_address": { 00:09:50.188 "trtype": "TCP", 00:09:50.188 "adrfam": "IPv4", 00:09:50.188 "traddr": "10.0.0.3", 00:09:50.188 "trsvcid": "4420" 00:09:50.188 }, 00:09:50.188 "peer_address": { 00:09:50.188 "trtype": "TCP", 00:09:50.188 "adrfam": "IPv4", 00:09:50.188 "traddr": "10.0.0.1", 00:09:50.188 "trsvcid": "34854" 00:09:50.188 }, 00:09:50.188 "auth": { 00:09:50.188 "state": "completed", 00:09:50.188 "digest": "sha256", 00:09:50.188 "dhgroup": "null" 00:09:50.188 } 00:09:50.188 } 00:09:50.188 ]' 00:09:50.188 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.447 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.447 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.447 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:50.447 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.447 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.447 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.447 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.706 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:09:50.706 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.983 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.984 08:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.984 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.984 { 00:09:55.984 "cntlid": 3, 00:09:55.984 "qid": 0, 00:09:55.984 "state": "enabled", 00:09:55.984 "thread": "nvmf_tgt_poll_group_000", 00:09:55.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:09:55.984 "listen_address": { 00:09:55.984 "trtype": "TCP", 00:09:55.984 "adrfam": "IPv4", 00:09:55.984 "traddr": "10.0.0.3", 00:09:55.984 "trsvcid": "4420" 00:09:55.984 }, 00:09:55.984 "peer_address": { 00:09:55.984 "trtype": "TCP", 00:09:55.984 "adrfam": "IPv4", 00:09:55.984 "traddr": "10.0.0.1", 00:09:55.984 "trsvcid": "34884" 00:09:55.984 }, 00:09:55.984 "auth": { 00:09:55.984 "state": "completed", 00:09:55.984 "digest": "sha256", 00:09:55.984 "dhgroup": "null" 00:09:55.984 } 00:09:55.984 } 00:09:55.984 ]' 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:55.984 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.243 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.243 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.243 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.502 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret 
DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:09:56.502 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.069 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.070 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.070 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.328 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.893 00:09:57.894 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.894 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.894 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:58.151 { 00:09:58.151 "cntlid": 5, 00:09:58.151 "qid": 0, 00:09:58.151 "state": "enabled", 00:09:58.151 "thread": "nvmf_tgt_poll_group_000", 00:09:58.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:09:58.151 "listen_address": { 00:09:58.151 "trtype": "TCP", 00:09:58.151 "adrfam": "IPv4", 00:09:58.151 "traddr": "10.0.0.3", 00:09:58.151 "trsvcid": "4420" 00:09:58.151 }, 00:09:58.151 "peer_address": { 00:09:58.151 "trtype": "TCP", 00:09:58.151 "adrfam": "IPv4", 00:09:58.151 "traddr": "10.0.0.1", 00:09:58.151 "trsvcid": "39682" 00:09:58.151 }, 00:09:58.151 "auth": { 00:09:58.151 "state": "completed", 00:09:58.151 "digest": "sha256", 00:09:58.151 "dhgroup": "null" 00:09:58.151 } 00:09:58.151 } 00:09:58.151 ]' 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:58.151 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.408 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.408 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.408 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.667 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:09:58.667 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.234 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.494 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.752 00:09:59.752 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.752 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.753 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.011 { 00:10:00.011 "cntlid": 7, 00:10:00.011 "qid": 0, 00:10:00.011 "state": "enabled", 00:10:00.011 "thread": "nvmf_tgt_poll_group_000", 00:10:00.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:00.011 "listen_address": { 00:10:00.011 "trtype": "TCP", 00:10:00.011 "adrfam": "IPv4", 00:10:00.011 "traddr": "10.0.0.3", 00:10:00.011 "trsvcid": "4420" 00:10:00.011 }, 00:10:00.011 "peer_address": { 00:10:00.011 "trtype": "TCP", 00:10:00.011 "adrfam": "IPv4", 00:10:00.011 "traddr": "10.0.0.1", 00:10:00.011 "trsvcid": "39708" 00:10:00.011 }, 00:10:00.011 "auth": { 00:10:00.011 "state": "completed", 00:10:00.011 "digest": "sha256", 00:10:00.011 "dhgroup": "null" 00:10:00.011 } 00:10:00.011 } 00:10:00.011 ]' 00:10:00.011 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.270 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.528 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:00.528 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.095 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.355 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.614 00:10:01.873 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.873 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.873 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.131 { 00:10:02.131 "cntlid": 9, 00:10:02.131 "qid": 0, 00:10:02.131 "state": "enabled", 00:10:02.131 "thread": "nvmf_tgt_poll_group_000", 00:10:02.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:02.131 "listen_address": { 00:10:02.131 "trtype": "TCP", 00:10:02.131 "adrfam": "IPv4", 00:10:02.131 "traddr": "10.0.0.3", 00:10:02.131 "trsvcid": "4420" 00:10:02.131 }, 00:10:02.131 "peer_address": { 00:10:02.131 "trtype": "TCP", 00:10:02.131 "adrfam": "IPv4", 00:10:02.131 "traddr": "10.0.0.1", 00:10:02.131 "trsvcid": "39734" 00:10:02.131 }, 00:10:02.131 "auth": { 00:10:02.131 "state": "completed", 00:10:02.131 "digest": "sha256", 00:10:02.131 "dhgroup": "ffdhe2048" 00:10:02.131 } 00:10:02.131 } 00:10:02.131 ]' 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.131 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.698 
08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:02.698 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.265 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.523 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:03.523 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.524 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.783 00:10:03.783 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.783 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.783 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.042 { 00:10:04.042 "cntlid": 11, 00:10:04.042 "qid": 0, 00:10:04.042 "state": "enabled", 00:10:04.042 "thread": "nvmf_tgt_poll_group_000", 00:10:04.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:04.042 "listen_address": { 00:10:04.042 "trtype": "TCP", 00:10:04.042 "adrfam": "IPv4", 00:10:04.042 "traddr": "10.0.0.3", 00:10:04.042 "trsvcid": "4420" 00:10:04.042 }, 00:10:04.042 "peer_address": { 00:10:04.042 "trtype": "TCP", 00:10:04.042 "adrfam": "IPv4", 00:10:04.042 "traddr": "10.0.0.1", 00:10:04.042 "trsvcid": "39752" 00:10:04.042 }, 00:10:04.042 "auth": { 00:10:04.042 "state": "completed", 00:10:04.042 "digest": "sha256", 00:10:04.042 "dhgroup": "ffdhe2048" 00:10:04.042 } 00:10:04.042 } 00:10:04.042 ]' 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:04.042 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.301 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.301 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.301 
08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.576 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:04.576 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.152 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.412 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.671 00:10:05.930 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.930 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.930 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.189 { 00:10:06.189 "cntlid": 13, 00:10:06.189 "qid": 0, 00:10:06.189 "state": "enabled", 00:10:06.189 "thread": "nvmf_tgt_poll_group_000", 00:10:06.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:06.189 "listen_address": { 00:10:06.189 "trtype": "TCP", 00:10:06.189 "adrfam": "IPv4", 00:10:06.189 "traddr": "10.0.0.3", 00:10:06.189 "trsvcid": "4420" 00:10:06.189 }, 00:10:06.189 "peer_address": { 00:10:06.189 "trtype": "TCP", 00:10:06.189 "adrfam": "IPv4", 00:10:06.189 "traddr": "10.0.0.1", 00:10:06.189 "trsvcid": "39774" 00:10:06.189 }, 00:10:06.189 "auth": { 00:10:06.189 "state": "completed", 00:10:06.189 "digest": "sha256", 00:10:06.189 "dhgroup": "ffdhe2048" 00:10:06.189 } 00:10:06.189 } 00:10:06.189 ]' 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.189 08:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.189 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.448 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:06.448 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.386 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
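For reference, the trace above repeats one DH-HMAC-CHAP cycle per digest/dhgroup/key combination. Below is a minimal bash sketch of a single iteration, built only from commands that appear in this log (addresses, socket paths and NQNs are copied from the trace; the key0/ckey0 keyring entries are assumed to have been registered earlier in the run, the target application is assumed to listen on its default RPC socket, and $dhchap_key0/$dhchap_ckey0 are placeholder variables for the raw DHHC-1 secret strings shown in the trace):

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (sha256 / ffdhe2048 / key0).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host-side initiator to a single digest and DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host on the subsystem with the matching key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach over TCP with bidirectional authentication.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller exists and the qpair finished authentication.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
[[ $("$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state') == completed ]]
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, passing the raw secrets
# (the DHHC-1:... strings shown in the trace, elided here).
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 \
        --dhchap-secret "$dhchap_key0" --dhchap-ctrl-secret "$dhchap_ckey0"
nvme disconnect -n "$subnqn"

# Drop the host entry again before the next key/dhgroup combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"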
00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.644 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.903 00:10:07.903 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.903 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.903 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.162 { 00:10:08.162 "cntlid": 15, 00:10:08.162 "qid": 0, 00:10:08.162 "state": "enabled", 00:10:08.162 "thread": "nvmf_tgt_poll_group_000", 00:10:08.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:08.162 "listen_address": { 00:10:08.162 "trtype": "TCP", 00:10:08.162 "adrfam": "IPv4", 00:10:08.162 "traddr": "10.0.0.3", 00:10:08.162 "trsvcid": "4420" 00:10:08.162 }, 00:10:08.162 "peer_address": { 00:10:08.162 "trtype": "TCP", 00:10:08.162 "adrfam": "IPv4", 00:10:08.162 "traddr": "10.0.0.1", 00:10:08.162 "trsvcid": "48336" 00:10:08.162 }, 00:10:08.162 "auth": { 00:10:08.162 "state": "completed", 00:10:08.162 "digest": "sha256", 00:10:08.162 "dhgroup": "ffdhe2048" 00:10:08.162 } 00:10:08.162 } 00:10:08.162 ]' 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.162 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.421 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:08.421 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.421 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.421 
08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.421 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.680 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:08.680 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:09.248 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.249 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.508 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:10.075 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.075 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.335 { 00:10:10.335 "cntlid": 17, 00:10:10.335 "qid": 0, 00:10:10.335 "state": "enabled", 00:10:10.335 "thread": "nvmf_tgt_poll_group_000", 00:10:10.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:10.335 "listen_address": { 00:10:10.335 "trtype": "TCP", 00:10:10.335 "adrfam": "IPv4", 00:10:10.335 "traddr": "10.0.0.3", 00:10:10.335 "trsvcid": "4420" 00:10:10.335 }, 00:10:10.335 "peer_address": { 00:10:10.335 "trtype": "TCP", 00:10:10.335 "adrfam": "IPv4", 00:10:10.335 "traddr": "10.0.0.1", 00:10:10.335 "trsvcid": "48350" 00:10:10.335 }, 00:10:10.335 "auth": { 00:10:10.335 "state": "completed", 00:10:10.335 "digest": "sha256", 00:10:10.335 "dhgroup": "ffdhe3072" 00:10:10.335 } 00:10:10.335 } 00:10:10.335 ]' 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:10.335 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.335 08:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.335 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.335 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.594 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:10.594 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:11.531 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.532 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.532 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.100 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.100 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.359 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.359 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.359 { 00:10:12.359 "cntlid": 19, 00:10:12.359 "qid": 0, 00:10:12.359 "state": "enabled", 00:10:12.359 "thread": "nvmf_tgt_poll_group_000", 00:10:12.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:12.359 "listen_address": { 00:10:12.359 "trtype": "TCP", 00:10:12.359 "adrfam": "IPv4", 00:10:12.360 "traddr": "10.0.0.3", 00:10:12.360 "trsvcid": "4420" 00:10:12.360 }, 00:10:12.360 "peer_address": { 00:10:12.360 "trtype": "TCP", 00:10:12.360 "adrfam": "IPv4", 00:10:12.360 "traddr": "10.0.0.1", 00:10:12.360 "trsvcid": "48380" 00:10:12.360 }, 00:10:12.360 "auth": { 00:10:12.360 "state": "completed", 00:10:12.360 "digest": "sha256", 00:10:12.360 "dhgroup": "ffdhe3072" 00:10:12.360 } 00:10:12.360 } 00:10:12.360 ]' 00:10:12.360 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.360 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.360 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.360 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:12.360 08:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.360 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.360 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.360 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.619 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:12.619 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.188 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.465 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.724 00:10:13.983 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.983 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.983 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.242 { 00:10:14.242 "cntlid": 21, 00:10:14.242 "qid": 0, 00:10:14.242 "state": "enabled", 00:10:14.242 "thread": "nvmf_tgt_poll_group_000", 00:10:14.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:14.242 "listen_address": { 00:10:14.242 "trtype": "TCP", 00:10:14.242 "adrfam": "IPv4", 00:10:14.242 "traddr": "10.0.0.3", 00:10:14.242 "trsvcid": "4420" 00:10:14.242 }, 00:10:14.242 "peer_address": { 00:10:14.242 "trtype": "TCP", 00:10:14.242 "adrfam": "IPv4", 00:10:14.242 "traddr": "10.0.0.1", 00:10:14.242 "trsvcid": "48388" 00:10:14.242 }, 00:10:14.242 "auth": { 00:10:14.242 "state": "completed", 00:10:14.242 "digest": "sha256", 00:10:14.242 "dhgroup": "ffdhe3072" 00:10:14.242 } 00:10:14.242 } 00:10:14.242 ]' 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.242 08:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.242 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.501 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:14.501 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.437 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.437 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.005 00:10:16.005 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.005 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.005 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.263 { 00:10:16.263 "cntlid": 23, 00:10:16.263 "qid": 0, 00:10:16.263 "state": "enabled", 00:10:16.263 "thread": "nvmf_tgt_poll_group_000", 00:10:16.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:16.263 "listen_address": { 00:10:16.263 "trtype": "TCP", 00:10:16.263 "adrfam": "IPv4", 00:10:16.263 "traddr": "10.0.0.3", 00:10:16.263 "trsvcid": "4420" 00:10:16.263 }, 00:10:16.263 "peer_address": { 00:10:16.263 "trtype": "TCP", 00:10:16.263 "adrfam": "IPv4", 00:10:16.263 "traddr": "10.0.0.1", 00:10:16.263 "trsvcid": "48426" 00:10:16.263 }, 00:10:16.263 "auth": { 00:10:16.263 "state": "completed", 00:10:16.263 "digest": "sha256", 00:10:16.263 "dhgroup": "ffdhe3072" 00:10:16.263 } 00:10:16.263 } 00:10:16.263 ]' 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:16.263 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.263 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.263 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.263 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.522 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:16.522 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.087 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.653 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.911 00:10:17.911 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.911 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.911 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.170 { 00:10:18.170 "cntlid": 25, 00:10:18.170 "qid": 0, 00:10:18.170 "state": "enabled", 00:10:18.170 "thread": "nvmf_tgt_poll_group_000", 00:10:18.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:18.170 "listen_address": { 00:10:18.170 "trtype": "TCP", 00:10:18.170 "adrfam": "IPv4", 00:10:18.170 "traddr": "10.0.0.3", 00:10:18.170 "trsvcid": "4420" 00:10:18.170 }, 00:10:18.170 "peer_address": { 00:10:18.170 "trtype": "TCP", 00:10:18.170 "adrfam": "IPv4", 00:10:18.170 "traddr": "10.0.0.1", 00:10:18.170 "trsvcid": "46048" 00:10:18.170 }, 00:10:18.170 "auth": { 00:10:18.170 "state": "completed", 00:10:18.170 "digest": "sha256", 00:10:18.170 "dhgroup": "ffdhe4096" 00:10:18.170 } 00:10:18.170 } 00:10:18.170 ]' 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:18.170 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.429 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.429 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.429 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.687 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:18.687 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.253 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.521 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.090 00:10:20.090 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.090 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.090 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.349 { 00:10:20.349 "cntlid": 27, 00:10:20.349 "qid": 0, 00:10:20.349 "state": "enabled", 00:10:20.349 "thread": "nvmf_tgt_poll_group_000", 00:10:20.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:20.349 "listen_address": { 00:10:20.349 "trtype": "TCP", 00:10:20.349 "adrfam": "IPv4", 00:10:20.349 "traddr": "10.0.0.3", 00:10:20.349 "trsvcid": "4420" 00:10:20.349 }, 00:10:20.349 "peer_address": { 00:10:20.349 "trtype": "TCP", 00:10:20.349 "adrfam": "IPv4", 00:10:20.349 "traddr": "10.0.0.1", 00:10:20.349 "trsvcid": "46080" 00:10:20.349 }, 00:10:20.349 "auth": { 00:10:20.349 "state": "completed", 
00:10:20.349 "digest": "sha256", 00:10:20.349 "dhgroup": "ffdhe4096" 00:10:20.349 } 00:10:20.349 } 00:10:20.349 ]' 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.349 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.607 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:20.607 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.607 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.607 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.608 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.866 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:20.866 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.433 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.434 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.692 08:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.692 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.693 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.260 00:10:22.260 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.260 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.260 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.519 { 00:10:22.519 "cntlid": 29, 00:10:22.519 "qid": 0, 00:10:22.519 "state": "enabled", 00:10:22.519 "thread": "nvmf_tgt_poll_group_000", 00:10:22.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:22.519 "listen_address": { 00:10:22.519 "trtype": "TCP", 00:10:22.519 "adrfam": "IPv4", 00:10:22.519 "traddr": "10.0.0.3", 00:10:22.519 "trsvcid": "4420" 00:10:22.519 }, 00:10:22.519 "peer_address": { 00:10:22.519 "trtype": "TCP", 00:10:22.519 "adrfam": 
"IPv4", 00:10:22.519 "traddr": "10.0.0.1", 00:10:22.519 "trsvcid": "46112" 00:10:22.519 }, 00:10:22.519 "auth": { 00:10:22.519 "state": "completed", 00:10:22.519 "digest": "sha256", 00:10:22.519 "dhgroup": "ffdhe4096" 00:10:22.519 } 00:10:22.519 } 00:10:22.519 ]' 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.519 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.779 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:22.779 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.779 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.779 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.779 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.037 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:23.037 08:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.604 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:23.864 08:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.864 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.123 00:10:24.381 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.382 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.382 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.382 { 00:10:24.382 "cntlid": 31, 00:10:24.382 "qid": 0, 00:10:24.382 "state": "enabled", 00:10:24.382 "thread": "nvmf_tgt_poll_group_000", 00:10:24.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:24.382 "listen_address": { 00:10:24.382 "trtype": "TCP", 00:10:24.382 "adrfam": "IPv4", 00:10:24.382 "traddr": "10.0.0.3", 00:10:24.382 "trsvcid": "4420" 00:10:24.382 }, 00:10:24.382 "peer_address": { 00:10:24.382 "trtype": "TCP", 
00:10:24.382 "adrfam": "IPv4", 00:10:24.382 "traddr": "10.0.0.1", 00:10:24.382 "trsvcid": "46136" 00:10:24.382 }, 00:10:24.382 "auth": { 00:10:24.382 "state": "completed", 00:10:24.382 "digest": "sha256", 00:10:24.382 "dhgroup": "ffdhe4096" 00:10:24.382 } 00:10:24.382 } 00:10:24.382 ]' 00:10:24.382 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.641 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.900 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:24.900 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:25.467 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:26.068 
08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.068 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.327 00:10:26.327 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.327 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.327 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.586 { 00:10:26.586 "cntlid": 33, 00:10:26.586 "qid": 0, 00:10:26.586 "state": "enabled", 00:10:26.586 "thread": "nvmf_tgt_poll_group_000", 00:10:26.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:26.586 "listen_address": { 00:10:26.586 "trtype": "TCP", 00:10:26.586 "adrfam": "IPv4", 00:10:26.586 "traddr": 
"10.0.0.3", 00:10:26.586 "trsvcid": "4420" 00:10:26.586 }, 00:10:26.586 "peer_address": { 00:10:26.586 "trtype": "TCP", 00:10:26.586 "adrfam": "IPv4", 00:10:26.586 "traddr": "10.0.0.1", 00:10:26.586 "trsvcid": "46156" 00:10:26.586 }, 00:10:26.586 "auth": { 00:10:26.586 "state": "completed", 00:10:26.586 "digest": "sha256", 00:10:26.586 "dhgroup": "ffdhe6144" 00:10:26.586 } 00:10:26.586 } 00:10:26.586 ]' 00:10:26.586 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.844 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.103 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:27.103 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.676 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.934 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.502 00:10:28.502 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.502 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.502 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.761 { 00:10:28.761 "cntlid": 35, 00:10:28.761 "qid": 0, 00:10:28.761 "state": "enabled", 00:10:28.761 "thread": "nvmf_tgt_poll_group_000", 
00:10:28.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:28.761 "listen_address": { 00:10:28.761 "trtype": "TCP", 00:10:28.761 "adrfam": "IPv4", 00:10:28.761 "traddr": "10.0.0.3", 00:10:28.761 "trsvcid": "4420" 00:10:28.761 }, 00:10:28.761 "peer_address": { 00:10:28.761 "trtype": "TCP", 00:10:28.761 "adrfam": "IPv4", 00:10:28.761 "traddr": "10.0.0.1", 00:10:28.761 "trsvcid": "54764" 00:10:28.761 }, 00:10:28.761 "auth": { 00:10:28.761 "state": "completed", 00:10:28.761 "digest": "sha256", 00:10:28.761 "dhgroup": "ffdhe6144" 00:10:28.761 } 00:10:28.761 } 00:10:28.761 ]' 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:28.761 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.020 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.020 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.020 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.279 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:29.279 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.847 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:29.847 08:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.106 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.673 00:10:30.673 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.673 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.673 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.932 { 
00:10:30.932 "cntlid": 37, 00:10:30.932 "qid": 0, 00:10:30.932 "state": "enabled", 00:10:30.932 "thread": "nvmf_tgt_poll_group_000", 00:10:30.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:30.932 "listen_address": { 00:10:30.932 "trtype": "TCP", 00:10:30.932 "adrfam": "IPv4", 00:10:30.932 "traddr": "10.0.0.3", 00:10:30.932 "trsvcid": "4420" 00:10:30.932 }, 00:10:30.932 "peer_address": { 00:10:30.932 "trtype": "TCP", 00:10:30.932 "adrfam": "IPv4", 00:10:30.932 "traddr": "10.0.0.1", 00:10:30.932 "trsvcid": "54798" 00:10:30.932 }, 00:10:30.932 "auth": { 00:10:30.932 "state": "completed", 00:10:30.932 "digest": "sha256", 00:10:30.932 "dhgroup": "ffdhe6144" 00:10:30.932 } 00:10:30.932 } 00:10:30.932 ]' 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.932 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.500 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:31.500 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:32.067 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:32.327 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:32.894 00:10:32.894 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.894 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.894 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:33.153 { 00:10:33.153 "cntlid": 39, 00:10:33.153 "qid": 0, 00:10:33.153 "state": "enabled", 00:10:33.153 "thread": "nvmf_tgt_poll_group_000", 00:10:33.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:33.153 "listen_address": { 00:10:33.153 "trtype": "TCP", 00:10:33.153 "adrfam": "IPv4", 00:10:33.153 "traddr": "10.0.0.3", 00:10:33.153 "trsvcid": "4420" 00:10:33.153 }, 00:10:33.153 "peer_address": { 00:10:33.153 "trtype": "TCP", 00:10:33.153 "adrfam": "IPv4", 00:10:33.153 "traddr": "10.0.0.1", 00:10:33.153 "trsvcid": "54826" 00:10:33.153 }, 00:10:33.153 "auth": { 00:10:33.153 "state": "completed", 00:10:33.153 "digest": "sha256", 00:10:33.153 "dhgroup": "ffdhe6144" 00:10:33.153 } 00:10:33.153 } 00:10:33.153 ]' 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:33.153 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.411 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.411 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.412 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.670 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:33.670 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.237 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.496 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.063 00:10:35.063 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.063 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.063 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.322 { 00:10:35.322 "cntlid": 41, 00:10:35.322 "qid": 0, 00:10:35.322 "state": "enabled", 00:10:35.322 "thread": "nvmf_tgt_poll_group_000", 00:10:35.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:35.322 "listen_address": { 00:10:35.322 "trtype": "TCP", 00:10:35.322 "adrfam": "IPv4", 00:10:35.322 "traddr": "10.0.0.3", 00:10:35.322 "trsvcid": "4420" 00:10:35.322 }, 00:10:35.322 "peer_address": { 00:10:35.322 "trtype": "TCP", 00:10:35.322 "adrfam": "IPv4", 00:10:35.322 "traddr": "10.0.0.1", 00:10:35.322 "trsvcid": "54850" 00:10:35.322 }, 00:10:35.322 "auth": { 00:10:35.322 "state": "completed", 00:10:35.322 "digest": "sha256", 00:10:35.322 "dhgroup": "ffdhe8192" 00:10:35.322 } 00:10:35.322 } 00:10:35.322 ]' 00:10:35.322 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.580 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.581 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.839 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:35.839 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
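Condensed for reference, the qpair check that each of these passes performs is built only from the RPCs and jq filters visible in the surrounding xtrace; a minimal sketch follows, where the rpc.py path, subsystem NQN and expected values are the ones the log itself prints, and the sha256/ffdhe8192 expectations change with the outer loops.

  # Sketch, not part of the log: target-side verification of the negotiated DH-HMAC-CHAP parameters.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # digest under test (sha256 here, sha384 later)
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # DH group under test
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication handshake finished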
00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.406 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.986 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.572 00:10:37.572 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.572 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.572 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.830 08:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.830 { 00:10:37.830 "cntlid": 43, 00:10:37.830 "qid": 0, 00:10:37.830 "state": "enabled", 00:10:37.830 "thread": "nvmf_tgt_poll_group_000", 00:10:37.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:37.830 "listen_address": { 00:10:37.830 "trtype": "TCP", 00:10:37.830 "adrfam": "IPv4", 00:10:37.830 "traddr": "10.0.0.3", 00:10:37.830 "trsvcid": "4420" 00:10:37.830 }, 00:10:37.830 "peer_address": { 00:10:37.830 "trtype": "TCP", 00:10:37.830 "adrfam": "IPv4", 00:10:37.830 "traddr": "10.0.0.1", 00:10:37.830 "trsvcid": "54876" 00:10:37.830 }, 00:10:37.830 "auth": { 00:10:37.830 "state": "completed", 00:10:37.830 "digest": "sha256", 00:10:37.830 "dhgroup": "ffdhe8192" 00:10:37.830 } 00:10:37.830 } 00:10:37.830 ]' 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:37.830 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.089 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.089 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.089 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.347 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:38.348 08:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
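The nvme-cli leg and the teardown that just ran repeat with every key. Stripped of xtrace noise, and with the DHHC-1 secrets replaced by placeholders (the real values are the ones printed above; the numeric prefix after DHHC-1 varies per key), the sequence is approximately:

  # Sketch: kernel-initiator connect with the same DH-HMAC-CHAP key, then clean-up for the next pass.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
      --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce \
      --dhchap-secret 'DHHC-1:xx:<host secret from the log>' \
      --dhchap-ctrl-secret 'DHHC-1:xx:<controller secret from the log>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Remove the host entry so the next digest/dhgroup/key combination starts from a clean subsystem.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce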
00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.915 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.174 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.741 00:10:39.999 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.999 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.999 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.258 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.258 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.258 08:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.258 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.259 { 00:10:40.259 "cntlid": 45, 00:10:40.259 "qid": 0, 00:10:40.259 "state": "enabled", 00:10:40.259 "thread": "nvmf_tgt_poll_group_000", 00:10:40.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:40.259 "listen_address": { 00:10:40.259 "trtype": "TCP", 00:10:40.259 "adrfam": "IPv4", 00:10:40.259 "traddr": "10.0.0.3", 00:10:40.259 "trsvcid": "4420" 00:10:40.259 }, 00:10:40.259 "peer_address": { 00:10:40.259 "trtype": "TCP", 00:10:40.259 "adrfam": "IPv4", 00:10:40.259 "traddr": "10.0.0.1", 00:10:40.259 "trsvcid": "57650" 00:10:40.259 }, 00:10:40.259 "auth": { 00:10:40.259 "state": "completed", 00:10:40.259 "digest": "sha256", 00:10:40.259 "dhgroup": "ffdhe8192" 00:10:40.259 } 00:10:40.259 } 00:10:40.259 ]' 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.259 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.517 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:40.517 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
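The setup half of each iteration is the mirror image: narrow the host to one digest/dhgroup pair, register the host NQN on the subsystem with the key under test, and attach a bdev controller using that same key. Taking the key2 pass that just finished as the example, and with hostrpc standing in for the rpc.py -s /var/tmp/host.sock wrapper the script uses, a sketch assembled only from commands in this log is:

  # Sketch: per-iteration DH-HMAC-CHAP setup. key2/ckey2 are the pre-loaded key names this pass uses.
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  # 1. Restrict the host to the digest/DH group combination under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # 2. Allow the host NQN on the subsystem with the key (and controller key, when one exists).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # 3. Attach a host-side controller that authenticates with the same key pair.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2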
00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:41.454 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:41.713 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.281 00:10:42.281 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.281 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.281 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.540 
08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.540 { 00:10:42.540 "cntlid": 47, 00:10:42.540 "qid": 0, 00:10:42.540 "state": "enabled", 00:10:42.540 "thread": "nvmf_tgt_poll_group_000", 00:10:42.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:42.540 "listen_address": { 00:10:42.540 "trtype": "TCP", 00:10:42.540 "adrfam": "IPv4", 00:10:42.540 "traddr": "10.0.0.3", 00:10:42.540 "trsvcid": "4420" 00:10:42.540 }, 00:10:42.540 "peer_address": { 00:10:42.540 "trtype": "TCP", 00:10:42.540 "adrfam": "IPv4", 00:10:42.540 "traddr": "10.0.0.1", 00:10:42.540 "trsvcid": "57666" 00:10:42.540 }, 00:10:42.540 "auth": { 00:10:42.540 "state": "completed", 00:10:42.540 "digest": "sha256", 00:10:42.540 "dhgroup": "ffdhe8192" 00:10:42.540 } 00:10:42.540 } 00:10:42.540 ]' 00:10:42.540 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.800 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.058 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:43.058 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
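Zooming out: the "for digest" line that follows marks the start of the sha384 block, and this whole stretch is three nested loops over digests, DH groups and key ids driven by target/auth.sh (the @118-@123 xtrace lines). A rough reconstruction of that driver, with the arrays and the connect_authenticate helper left as defined earlier in the script, is:

  # Reconstructed loop skeleton from the auth.sh@118-123 xtrace lines; digests/dhgroups/keys and
  # connect_authenticate are defined earlier in target/auth.sh and are not reproduced here.
  for digest in "${digests[@]}"; do          # sha256, then sha384, ... in this run
    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ..., ffdhe6144, ffdhe8192
      for keyid in "${!keys[@]}"; do         # key0..key3 (key3 has no matching controller key)
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done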
00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.625 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.884 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.452 00:10:44.452 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.452 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.452 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.711 { 00:10:44.711 "cntlid": 49, 00:10:44.711 "qid": 0, 00:10:44.711 "state": "enabled", 00:10:44.711 "thread": "nvmf_tgt_poll_group_000", 00:10:44.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:44.711 "listen_address": { 00:10:44.711 "trtype": "TCP", 00:10:44.711 "adrfam": "IPv4", 00:10:44.711 "traddr": "10.0.0.3", 00:10:44.711 "trsvcid": "4420" 00:10:44.711 }, 00:10:44.711 "peer_address": { 00:10:44.711 "trtype": "TCP", 00:10:44.711 "adrfam": "IPv4", 00:10:44.711 "traddr": "10.0.0.1", 00:10:44.711 "trsvcid": "57700" 00:10:44.711 }, 00:10:44.711 "auth": { 00:10:44.711 "state": "completed", 00:10:44.711 "digest": "sha384", 00:10:44.711 "dhgroup": "null" 00:10:44.711 } 00:10:44.711 } 00:10:44.711 ]' 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.711 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.970 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:44.970 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.907 08:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.907 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.166 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.425 00:10:46.425 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.425 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.425 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.684 { 00:10:46.684 "cntlid": 51, 00:10:46.684 "qid": 0, 00:10:46.684 "state": "enabled", 00:10:46.684 "thread": "nvmf_tgt_poll_group_000", 00:10:46.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:46.684 "listen_address": { 00:10:46.684 "trtype": "TCP", 00:10:46.684 "adrfam": "IPv4", 00:10:46.684 "traddr": "10.0.0.3", 00:10:46.684 "trsvcid": "4420" 00:10:46.684 }, 00:10:46.684 "peer_address": { 00:10:46.684 "trtype": "TCP", 00:10:46.684 "adrfam": "IPv4", 00:10:46.684 "traddr": "10.0.0.1", 00:10:46.684 "trsvcid": "57730" 00:10:46.684 }, 00:10:46.684 "auth": { 00:10:46.684 "state": "completed", 00:10:46.684 "digest": "sha384", 00:10:46.684 "dhgroup": "null" 00:10:46.684 } 00:10:46.684 } 00:10:46.684 ]' 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:46.684 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.943 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.943 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.943 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.201 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:47.201 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.768 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.768 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.028 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.341 00:10:48.341 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.341 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:10:48.341 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.599 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.599 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.599 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.599 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.599 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.859 { 00:10:48.859 "cntlid": 53, 00:10:48.859 "qid": 0, 00:10:48.859 "state": "enabled", 00:10:48.859 "thread": "nvmf_tgt_poll_group_000", 00:10:48.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:48.859 "listen_address": { 00:10:48.859 "trtype": "TCP", 00:10:48.859 "adrfam": "IPv4", 00:10:48.859 "traddr": "10.0.0.3", 00:10:48.859 "trsvcid": "4420" 00:10:48.859 }, 00:10:48.859 "peer_address": { 00:10:48.859 "trtype": "TCP", 00:10:48.859 "adrfam": "IPv4", 00:10:48.859 "traddr": "10.0.0.1", 00:10:48.859 "trsvcid": "43354" 00:10:48.859 }, 00:10:48.859 "auth": { 00:10:48.859 "state": "completed", 00:10:48.859 "digest": "sha384", 00:10:48.859 "dhgroup": "null" 00:10:48.859 } 00:10:48.859 } 00:10:48.859 ]' 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.859 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.118 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:49.118 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.055 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.314 00:10:50.314 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.314 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.314 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.883 { 00:10:50.883 "cntlid": 55, 00:10:50.883 "qid": 0, 00:10:50.883 "state": "enabled", 00:10:50.883 "thread": "nvmf_tgt_poll_group_000", 00:10:50.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:50.883 "listen_address": { 00:10:50.883 "trtype": "TCP", 00:10:50.883 "adrfam": "IPv4", 00:10:50.883 "traddr": "10.0.0.3", 00:10:50.883 "trsvcid": "4420" 00:10:50.883 }, 00:10:50.883 "peer_address": { 00:10:50.883 "trtype": "TCP", 00:10:50.883 "adrfam": "IPv4", 00:10:50.883 "traddr": "10.0.0.1", 00:10:50.883 "trsvcid": "43382" 00:10:50.883 }, 00:10:50.883 "auth": { 00:10:50.883 "state": "completed", 00:10:50.883 "digest": "sha384", 00:10:50.883 "dhgroup": "null" 00:10:50.883 } 00:10:50.883 } 00:10:50.883 ]' 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.883 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.142 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:51.142 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:51.710 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:10:51.710 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:51.710 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.710 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.969 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.969 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.969 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.969 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:51.969 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.228 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.488 00:10:52.488 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
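The xtrace output above and below repeats the same DH-HMAC-CHAP cycle from target/auth.sh for each digest/dhgroup/key combination: restrict the host-side bdev_nvme options to one digest and dhgroup, register the host on the subsystem with a dhchap key, attach a controller over TCP with the same key material, verify the negotiated auth parameters on the resulting qpair, then tear down and re-check through the kernel initiator. The following is a minimal sketch of one such iteration, assembled only from the commands visible in this log; rpc_cmd and hostrpc stand for the wrappers seen here (target-side RPC versus rpc.py -s /var/tmp/host.sock), and $digest, $dhgroup, $keyid, $secret and $ctrl_secret are illustrative placeholders for the values (e.g. sha384, ffdhe2048, key0, DHHC-1:... strings) that the real script supplies.

    # One connect_authenticate-style iteration, assuming placeholder variables
    # $digest, $dhgroup, $keyid, $secret, $ctrl_secret are set by the caller.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce

    # Host side: only allow the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: register the host with the key (and controller key when one exists).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" ${ctrl_secret:+--dhchap-ctrlr-key "ckey$keyid"}

    # Host side: attach a controller over TCP with the same key material.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" ${ctrl_secret:+--dhchap-ctrlr-key "ckey$keyid"}

    # Verify the controller exists and the qpair negotiated the expected auth parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

    # Tear down, repeat the check through the kernel initiator, then remove the host entry.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret "$secret" ${ctrl_secret:+--dhchap-ctrl-secret "$ctrl_secret"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In this run the loop walks the sha384 digest across the null, ffdhe2048, ffdhe3072 and ffdhe4096 dhgroups with key IDs 0 through 3, which is why the same attach/verify/detach pattern recurs with only the --dhchap-dhgroup and key arguments changing.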
00:10:52.488 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.488 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.747 { 00:10:52.747 "cntlid": 57, 00:10:52.747 "qid": 0, 00:10:52.747 "state": "enabled", 00:10:52.747 "thread": "nvmf_tgt_poll_group_000", 00:10:52.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:52.747 "listen_address": { 00:10:52.747 "trtype": "TCP", 00:10:52.747 "adrfam": "IPv4", 00:10:52.747 "traddr": "10.0.0.3", 00:10:52.747 "trsvcid": "4420" 00:10:52.747 }, 00:10:52.747 "peer_address": { 00:10:52.747 "trtype": "TCP", 00:10:52.747 "adrfam": "IPv4", 00:10:52.747 "traddr": "10.0.0.1", 00:10:52.747 "trsvcid": "43398" 00:10:52.747 }, 00:10:52.747 "auth": { 00:10:52.747 "state": "completed", 00:10:52.747 "digest": "sha384", 00:10:52.747 "dhgroup": "ffdhe2048" 00:10:52.747 } 00:10:52.747 } 00:10:52.747 ]' 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:52.747 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.005 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.005 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.005 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.264 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:53.265 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: 
--dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.833 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.093 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.352 00:10:54.352 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.352 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.352 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.611 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.611 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.611 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.611 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.870 { 00:10:54.870 "cntlid": 59, 00:10:54.870 "qid": 0, 00:10:54.870 "state": "enabled", 00:10:54.870 "thread": "nvmf_tgt_poll_group_000", 00:10:54.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:54.870 "listen_address": { 00:10:54.870 "trtype": "TCP", 00:10:54.870 "adrfam": "IPv4", 00:10:54.870 "traddr": "10.0.0.3", 00:10:54.870 "trsvcid": "4420" 00:10:54.870 }, 00:10:54.870 "peer_address": { 00:10:54.870 "trtype": "TCP", 00:10:54.870 "adrfam": "IPv4", 00:10:54.870 "traddr": "10.0.0.1", 00:10:54.870 "trsvcid": "43424" 00:10:54.870 }, 00:10:54.870 "auth": { 00:10:54.870 "state": "completed", 00:10:54.870 "digest": "sha384", 00:10:54.870 "dhgroup": "ffdhe2048" 00:10:54.870 } 00:10:54.870 } 00:10:54.870 ]' 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.870 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.128 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:55.128 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:56.065 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.325 08:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.584 00:10:56.584 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.584 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.584 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.843 { 00:10:56.843 "cntlid": 61, 00:10:56.843 "qid": 0, 00:10:56.843 "state": "enabled", 00:10:56.843 "thread": "nvmf_tgt_poll_group_000", 00:10:56.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:56.843 "listen_address": { 00:10:56.843 "trtype": "TCP", 00:10:56.843 "adrfam": "IPv4", 00:10:56.843 "traddr": "10.0.0.3", 00:10:56.843 "trsvcid": "4420" 00:10:56.843 }, 00:10:56.843 "peer_address": { 00:10:56.843 "trtype": "TCP", 00:10:56.843 "adrfam": "IPv4", 00:10:56.843 "traddr": "10.0.0.1", 00:10:56.843 "trsvcid": "43450" 00:10:56.843 }, 00:10:56.843 "auth": { 00:10:56.843 "state": "completed", 00:10:56.843 "digest": "sha384", 00:10:56.843 "dhgroup": "ffdhe2048" 00:10:56.843 } 00:10:56.843 } 00:10:56.843 ]' 00:10:56.843 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.102 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.361 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:57.362 08:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.928 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.496 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.755 00:10:58.755 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.755 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.755 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.059 { 00:10:59.059 "cntlid": 63, 00:10:59.059 "qid": 0, 00:10:59.059 "state": "enabled", 00:10:59.059 "thread": "nvmf_tgt_poll_group_000", 00:10:59.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:10:59.059 "listen_address": { 00:10:59.059 "trtype": "TCP", 00:10:59.059 "adrfam": "IPv4", 00:10:59.059 "traddr": "10.0.0.3", 00:10:59.059 "trsvcid": "4420" 00:10:59.059 }, 00:10:59.059 "peer_address": { 00:10:59.059 "trtype": "TCP", 00:10:59.059 "adrfam": "IPv4", 00:10:59.059 "traddr": "10.0.0.1", 00:10:59.059 "trsvcid": "42048" 00:10:59.059 }, 00:10:59.059 "auth": { 00:10:59.059 "state": "completed", 00:10:59.059 "digest": "sha384", 00:10:59.059 "dhgroup": "ffdhe2048" 00:10:59.059 } 00:10:59.059 } 00:10:59.059 ]' 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.059 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.339 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.339 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.339 08:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.598 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:10:59.598 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:00.165 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.165 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:00.165 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.165 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.166 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.166 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.166 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.166 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:00.425 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.684 00:11:00.684 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.684 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.684 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.943 { 00:11:00.943 "cntlid": 65, 00:11:00.943 "qid": 0, 00:11:00.943 "state": "enabled", 00:11:00.943 "thread": "nvmf_tgt_poll_group_000", 00:11:00.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:00.943 "listen_address": { 00:11:00.943 "trtype": "TCP", 00:11:00.943 "adrfam": "IPv4", 00:11:00.943 "traddr": "10.0.0.3", 00:11:00.943 "trsvcid": "4420" 00:11:00.943 }, 00:11:00.943 "peer_address": { 00:11:00.943 "trtype": "TCP", 00:11:00.943 "adrfam": "IPv4", 00:11:00.943 "traddr": "10.0.0.1", 00:11:00.943 "trsvcid": "42082" 00:11:00.943 }, 00:11:00.943 "auth": { 00:11:00.943 "state": "completed", 00:11:00.943 "digest": "sha384", 00:11:00.943 "dhgroup": "ffdhe3072" 00:11:00.943 } 00:11:00.943 } 00:11:00.943 ]' 00:11:00.943 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.202 08:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.461 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:01.461 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:02.027 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:02.287 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.547 08:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.547 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.806 00:11:02.806 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.806 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.806 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.374 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.374 { 00:11:03.374 "cntlid": 67, 00:11:03.374 "qid": 0, 00:11:03.374 "state": "enabled", 00:11:03.374 "thread": "nvmf_tgt_poll_group_000", 00:11:03.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:03.374 "listen_address": { 00:11:03.375 "trtype": "TCP", 00:11:03.375 "adrfam": "IPv4", 00:11:03.375 "traddr": "10.0.0.3", 00:11:03.375 "trsvcid": "4420" 00:11:03.375 }, 00:11:03.375 "peer_address": { 00:11:03.375 "trtype": "TCP", 00:11:03.375 "adrfam": "IPv4", 00:11:03.375 "traddr": "10.0.0.1", 00:11:03.375 "trsvcid": "42102" 00:11:03.375 }, 00:11:03.375 "auth": { 00:11:03.375 "state": "completed", 00:11:03.375 "digest": "sha384", 00:11:03.375 "dhgroup": "ffdhe3072" 00:11:03.375 } 00:11:03.375 } 00:11:03.375 ]' 00:11:03.375 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.375 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.375 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.375 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:03.375 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.375 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.375 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.375 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.633 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:03.633 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.569 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.137 00:11:05.137 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.137 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.137 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.395 { 00:11:05.395 "cntlid": 69, 00:11:05.395 "qid": 0, 00:11:05.395 "state": "enabled", 00:11:05.395 "thread": "nvmf_tgt_poll_group_000", 00:11:05.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:05.395 "listen_address": { 00:11:05.395 "trtype": "TCP", 00:11:05.395 "adrfam": "IPv4", 00:11:05.395 "traddr": "10.0.0.3", 00:11:05.395 "trsvcid": "4420" 00:11:05.395 }, 00:11:05.395 "peer_address": { 00:11:05.395 "trtype": "TCP", 00:11:05.395 "adrfam": "IPv4", 00:11:05.395 "traddr": "10.0.0.1", 00:11:05.395 "trsvcid": "42146" 00:11:05.395 }, 00:11:05.395 "auth": { 00:11:05.395 "state": "completed", 00:11:05.395 "digest": "sha384", 00:11:05.395 "dhgroup": "ffdhe3072" 00:11:05.395 } 00:11:05.395 } 00:11:05.395 ]' 00:11:05.395 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:05.395 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.654 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:05.654 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:06.590 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.591 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.158 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.158 { 00:11:07.158 "cntlid": 71, 00:11:07.158 "qid": 0, 00:11:07.158 "state": "enabled", 00:11:07.158 "thread": "nvmf_tgt_poll_group_000", 00:11:07.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:07.158 "listen_address": { 00:11:07.158 "trtype": "TCP", 00:11:07.158 "adrfam": "IPv4", 00:11:07.158 "traddr": "10.0.0.3", 00:11:07.158 "trsvcid": "4420" 00:11:07.158 }, 00:11:07.158 "peer_address": { 00:11:07.158 "trtype": "TCP", 00:11:07.158 "adrfam": "IPv4", 00:11:07.158 "traddr": "10.0.0.1", 00:11:07.158 "trsvcid": "42186" 00:11:07.158 }, 00:11:07.158 "auth": { 00:11:07.158 "state": "completed", 00:11:07.158 "digest": "sha384", 00:11:07.158 "dhgroup": "ffdhe3072" 00:11:07.158 } 00:11:07.158 } 00:11:07.158 ]' 00:11:07.158 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.417 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.417 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.417 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.417 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.417 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.417 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.417 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.675 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:07.675 08:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.611 08:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.611 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.178 00:11:09.178 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.178 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.178 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.437 { 00:11:09.437 "cntlid": 73, 00:11:09.437 "qid": 0, 00:11:09.437 "state": "enabled", 00:11:09.437 "thread": "nvmf_tgt_poll_group_000", 00:11:09.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:09.437 "listen_address": { 00:11:09.437 "trtype": "TCP", 00:11:09.437 "adrfam": "IPv4", 00:11:09.437 "traddr": "10.0.0.3", 00:11:09.437 "trsvcid": "4420" 00:11:09.437 }, 00:11:09.437 "peer_address": { 00:11:09.437 "trtype": "TCP", 00:11:09.437 "adrfam": "IPv4", 00:11:09.437 "traddr": "10.0.0.1", 00:11:09.437 "trsvcid": "57028" 00:11:09.437 }, 00:11:09.437 "auth": { 00:11:09.437 "state": "completed", 00:11:09.437 "digest": "sha384", 00:11:09.437 "dhgroup": "ffdhe4096" 00:11:09.437 } 00:11:09.437 } 00:11:09.437 ]' 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:09.437 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.715 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.715 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.715 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.981 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:09.981 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.549 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.808 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:10.808 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.808 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.808 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:10.808 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.809 08:45:18 
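After the qpair check, the SPDK host-side controller is detached and the same key slot is exercised once more through the kernel initiator: nvme-cli connects with the raw DHHC-1 secrets and is then disconnected, as seen just above. A sketch of that leg with the flags taken from the trace (the secret strings below are placeholders, not the values used in this run):

    # kernel host stack: connect with DH-HMAC-CHAP host and controller secrets
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
        --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 \
        --dhchap-secret "DHHC-1:00:<host secret for this key slot>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"

    # drop the kernel controller again before the next key slot is configured
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0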
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.809 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.067 00:11:11.067 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.067 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.067 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.326 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.326 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.326 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.326 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.585 { 00:11:11.585 "cntlid": 75, 00:11:11.585 "qid": 0, 00:11:11.585 "state": "enabled", 00:11:11.585 "thread": "nvmf_tgt_poll_group_000", 00:11:11.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:11.585 "listen_address": { 00:11:11.585 "trtype": "TCP", 00:11:11.585 "adrfam": "IPv4", 00:11:11.585 "traddr": "10.0.0.3", 00:11:11.585 "trsvcid": "4420" 00:11:11.585 }, 00:11:11.585 "peer_address": { 00:11:11.585 "trtype": "TCP", 00:11:11.585 "adrfam": "IPv4", 00:11:11.585 "traddr": "10.0.0.1", 00:11:11.585 "trsvcid": "57056" 00:11:11.585 }, 00:11:11.585 "auth": { 00:11:11.585 "state": "completed", 00:11:11.585 "digest": "sha384", 00:11:11.585 "dhgroup": "ffdhe4096" 00:11:11.585 } 00:11:11.585 } 00:11:11.585 ]' 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.585 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.844 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:11.844 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:12.780 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.781 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.348 00:11:13.348 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.348 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.348 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.607 { 00:11:13.607 "cntlid": 77, 00:11:13.607 "qid": 0, 00:11:13.607 "state": "enabled", 00:11:13.607 "thread": "nvmf_tgt_poll_group_000", 00:11:13.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:13.607 "listen_address": { 00:11:13.607 "trtype": "TCP", 00:11:13.607 "adrfam": "IPv4", 00:11:13.607 "traddr": "10.0.0.3", 00:11:13.607 "trsvcid": "4420" 00:11:13.607 }, 00:11:13.607 "peer_address": { 00:11:13.607 "trtype": "TCP", 00:11:13.607 "adrfam": "IPv4", 00:11:13.607 "traddr": "10.0.0.1", 00:11:13.607 "trsvcid": "57090" 00:11:13.607 }, 00:11:13.607 "auth": { 00:11:13.607 "state": "completed", 00:11:13.607 "digest": "sha384", 00:11:13.607 "dhgroup": "ffdhe4096" 00:11:13.607 } 00:11:13.607 } 00:11:13.607 ]' 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.607 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.865 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.865 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.865 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.124 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:14.124 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:14.691 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:14.950 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.951 08:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.951 08:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.518 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.518 { 00:11:15.518 "cntlid": 79, 00:11:15.518 "qid": 0, 00:11:15.518 "state": "enabled", 00:11:15.518 "thread": "nvmf_tgt_poll_group_000", 00:11:15.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:15.518 "listen_address": { 00:11:15.518 "trtype": "TCP", 00:11:15.518 "adrfam": "IPv4", 00:11:15.518 "traddr": "10.0.0.3", 00:11:15.518 "trsvcid": "4420" 00:11:15.518 }, 00:11:15.518 "peer_address": { 00:11:15.518 "trtype": "TCP", 00:11:15.518 "adrfam": "IPv4", 00:11:15.518 "traddr": "10.0.0.1", 00:11:15.518 "trsvcid": "57124" 00:11:15.518 }, 00:11:15.518 "auth": { 00:11:15.518 "state": "completed", 00:11:15.518 "digest": "sha384", 00:11:15.518 "dhgroup": "ffdhe4096" 00:11:15.518 } 00:11:15.518 } 00:11:15.518 ]' 00:11:15.518 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.777 08:45:23 
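Note the difference for key slot 3 in the round above: the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) comes out empty, so both nvmf_subsystem_add_host and bdev_nvme_attach_controller are invoked with --dhchap-key key3 only. That slot therefore appears to cover one-way (host-only) authentication, while slots 0 through 2 also pass a controller key for bidirectional authentication. A small sketch of how that conditional behaves (keyid, subnqn and hostnqn are illustrative names, not the script's own):

    # expands to nothing when ckeys[keyid] is unset or empty,
    # otherwise to: --dhchap-ctrlr-key ckey<keyid>
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"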
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.777 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.036 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:16.036 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:16.603 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.603 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:16.603 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.603 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.604 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.604 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.604 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.604 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.604 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
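The trace has just moved from the ffdhe4096 rounds to ffdhe6144. The outer loop at target/auth.sh@119 iterates over the DH groups and, for each group, the inner loop at @120 walks the key slots, re-pinning the host's allowed digest and dhgroup with bdev_nvme_set_options before every connect_authenticate call. Roughly, the structure being executed is the following (inferred from the loop markers and commands in this trace; the comment lists only groups actually seen in this run):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do         # key slots 0..3
            # pin the host to one digest/dhgroup so the negotiation outcome is deterministic
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done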
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.863 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.433 00:11:17.433 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.433 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.433 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.694 { 00:11:17.694 "cntlid": 81, 00:11:17.694 "qid": 0, 00:11:17.694 "state": "enabled", 00:11:17.694 "thread": "nvmf_tgt_poll_group_000", 00:11:17.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:17.694 "listen_address": { 00:11:17.694 "trtype": "TCP", 00:11:17.694 "adrfam": "IPv4", 00:11:17.694 "traddr": "10.0.0.3", 00:11:17.694 "trsvcid": "4420" 00:11:17.694 }, 00:11:17.694 "peer_address": { 00:11:17.694 "trtype": "TCP", 00:11:17.694 "adrfam": "IPv4", 00:11:17.694 "traddr": "10.0.0.1", 00:11:17.694 "trsvcid": "57146" 00:11:17.694 }, 00:11:17.694 "auth": { 00:11:17.694 "state": "completed", 00:11:17.694 "digest": "sha384", 00:11:17.694 "dhgroup": "ffdhe6144" 00:11:17.694 } 00:11:17.694 } 00:11:17.694 ]' 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
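Between attaching the controller and inspecting the qpairs, the script also confirms that the authenticated attach actually produced a controller named nvme0 on the host RPC socket. A minimal sketch of that check using the commands shown in the trace:

    # host side: list attached bdev controllers and verify the expected name
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]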
00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.694 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.952 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:17.952 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.952 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.952 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.952 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.211 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:18.211 08:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.146 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.714 00:11:19.714 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.714 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.714 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.973 { 00:11:19.973 "cntlid": 83, 00:11:19.973 "qid": 0, 00:11:19.973 "state": "enabled", 00:11:19.973 "thread": "nvmf_tgt_poll_group_000", 00:11:19.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:19.973 "listen_address": { 00:11:19.973 "trtype": "TCP", 00:11:19.973 "adrfam": "IPv4", 00:11:19.973 "traddr": "10.0.0.3", 00:11:19.973 "trsvcid": "4420" 00:11:19.973 }, 00:11:19.973 "peer_address": { 00:11:19.973 "trtype": "TCP", 00:11:19.973 "adrfam": "IPv4", 00:11:19.973 "traddr": "10.0.0.1", 00:11:19.973 "trsvcid": "39948" 00:11:19.973 }, 00:11:19.973 "auth": { 00:11:19.973 "state": "completed", 00:11:19.973 "digest": "sha384", 
00:11:19.973 "dhgroup": "ffdhe6144" 00:11:19.973 } 00:11:19.973 } 00:11:19.973 ]' 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.973 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.232 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.232 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.232 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:20.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:21.059 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.318 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.885 00:11:21.885 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.885 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.885 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.144 { 00:11:22.144 "cntlid": 85, 00:11:22.144 "qid": 0, 00:11:22.144 "state": "enabled", 00:11:22.144 "thread": "nvmf_tgt_poll_group_000", 00:11:22.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:22.144 "listen_address": { 00:11:22.144 "trtype": "TCP", 00:11:22.144 "adrfam": "IPv4", 00:11:22.144 "traddr": "10.0.0.3", 00:11:22.144 "trsvcid": "4420" 00:11:22.144 }, 00:11:22.144 "peer_address": { 00:11:22.144 "trtype": "TCP", 00:11:22.144 "adrfam": "IPv4", 00:11:22.144 "traddr": "10.0.0.1", 00:11:22.144 "trsvcid": "39978" 
00:11:22.144 }, 00:11:22.144 "auth": { 00:11:22.144 "state": "completed", 00:11:22.144 "digest": "sha384", 00:11:22.144 "dhgroup": "ffdhe6144" 00:11:22.144 } 00:11:22.144 } 00:11:22.144 ]' 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.144 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.403 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:22.403 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.346 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:23.635 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.636 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.906 00:11:23.906 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.906 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.906 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.165 { 00:11:24.165 "cntlid": 87, 00:11:24.165 "qid": 0, 00:11:24.165 "state": "enabled", 00:11:24.165 "thread": "nvmf_tgt_poll_group_000", 00:11:24.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:24.165 "listen_address": { 00:11:24.165 "trtype": "TCP", 00:11:24.165 "adrfam": "IPv4", 00:11:24.165 "traddr": "10.0.0.3", 00:11:24.165 "trsvcid": "4420" 00:11:24.165 }, 00:11:24.165 "peer_address": { 00:11:24.165 "trtype": "TCP", 00:11:24.165 "adrfam": "IPv4", 00:11:24.165 "traddr": "10.0.0.1", 00:11:24.165 "trsvcid": 
"40002" 00:11:24.165 }, 00:11:24.165 "auth": { 00:11:24.165 "state": "completed", 00:11:24.165 "digest": "sha384", 00:11:24.165 "dhgroup": "ffdhe6144" 00:11:24.165 } 00:11:24.165 } 00:11:24.165 ]' 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.165 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.424 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:24.424 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.424 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.424 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.424 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.683 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:24.683 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.251 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.510 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.446 00:11:26.446 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.446 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.446 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.447 { 00:11:26.447 "cntlid": 89, 00:11:26.447 "qid": 0, 00:11:26.447 "state": "enabled", 00:11:26.447 "thread": "nvmf_tgt_poll_group_000", 00:11:26.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:26.447 "listen_address": { 00:11:26.447 "trtype": "TCP", 00:11:26.447 "adrfam": "IPv4", 00:11:26.447 "traddr": "10.0.0.3", 00:11:26.447 "trsvcid": "4420" 00:11:26.447 }, 00:11:26.447 "peer_address": { 00:11:26.447 
"trtype": "TCP", 00:11:26.447 "adrfam": "IPv4", 00:11:26.447 "traddr": "10.0.0.1", 00:11:26.447 "trsvcid": "40022" 00:11:26.447 }, 00:11:26.447 "auth": { 00:11:26.447 "state": "completed", 00:11:26.447 "digest": "sha384", 00:11:26.447 "dhgroup": "ffdhe8192" 00:11:26.447 } 00:11:26.447 } 00:11:26.447 ]' 00:11:26.447 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.705 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.963 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:26.963 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:27.539 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.107 08:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.107 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.675 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.675 { 00:11:28.675 "cntlid": 91, 00:11:28.675 "qid": 0, 00:11:28.675 "state": "enabled", 00:11:28.675 "thread": "nvmf_tgt_poll_group_000", 00:11:28.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 
00:11:28.675 "listen_address": { 00:11:28.675 "trtype": "TCP", 00:11:28.675 "adrfam": "IPv4", 00:11:28.675 "traddr": "10.0.0.3", 00:11:28.675 "trsvcid": "4420" 00:11:28.675 }, 00:11:28.675 "peer_address": { 00:11:28.675 "trtype": "TCP", 00:11:28.675 "adrfam": "IPv4", 00:11:28.675 "traddr": "10.0.0.1", 00:11:28.675 "trsvcid": "38354" 00:11:28.675 }, 00:11:28.675 "auth": { 00:11:28.675 "state": "completed", 00:11:28.675 "digest": "sha384", 00:11:28.675 "dhgroup": "ffdhe8192" 00:11:28.675 } 00:11:28.675 } 00:11:28.675 ]' 00:11:28.675 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.934 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.193 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:29.193 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:29.761 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.020 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.957 00:11:30.957 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.957 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.957 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.216 { 00:11:31.216 "cntlid": 93, 00:11:31.216 "qid": 0, 00:11:31.216 "state": "enabled", 00:11:31.216 "thread": 
"nvmf_tgt_poll_group_000", 00:11:31.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:31.216 "listen_address": { 00:11:31.216 "trtype": "TCP", 00:11:31.216 "adrfam": "IPv4", 00:11:31.216 "traddr": "10.0.0.3", 00:11:31.216 "trsvcid": "4420" 00:11:31.216 }, 00:11:31.216 "peer_address": { 00:11:31.216 "trtype": "TCP", 00:11:31.216 "adrfam": "IPv4", 00:11:31.216 "traddr": "10.0.0.1", 00:11:31.216 "trsvcid": "38370" 00:11:31.216 }, 00:11:31.216 "auth": { 00:11:31.216 "state": "completed", 00:11:31.216 "digest": "sha384", 00:11:31.216 "dhgroup": "ffdhe8192" 00:11:31.216 } 00:11:31.216 } 00:11:31.216 ]' 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.216 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.475 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:31.475 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:32.410 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.410 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:32.410 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.411 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.411 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.411 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.411 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.411 08:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:32.669 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.670 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.670 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.670 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.670 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.670 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.236 00:11:33.495 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.495 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.495 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.753 { 00:11:33.753 "cntlid": 95, 00:11:33.753 "qid": 0, 00:11:33.753 "state": "enabled", 00:11:33.753 
"thread": "nvmf_tgt_poll_group_000", 00:11:33.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:33.753 "listen_address": { 00:11:33.753 "trtype": "TCP", 00:11:33.753 "adrfam": "IPv4", 00:11:33.753 "traddr": "10.0.0.3", 00:11:33.753 "trsvcid": "4420" 00:11:33.753 }, 00:11:33.753 "peer_address": { 00:11:33.753 "trtype": "TCP", 00:11:33.753 "adrfam": "IPv4", 00:11:33.753 "traddr": "10.0.0.1", 00:11:33.753 "trsvcid": "38398" 00:11:33.753 }, 00:11:33.753 "auth": { 00:11:33.753 "state": "completed", 00:11:33.753 "digest": "sha384", 00:11:33.753 "dhgroup": "ffdhe8192" 00:11:33.753 } 00:11:33.753 } 00:11:33.753 ]' 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.753 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.321 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:34.321 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.893 08:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:34.893 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.152 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.718 00:11:35.719 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.719 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.719 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.977 { 00:11:35.977 "cntlid": 97, 00:11:35.977 "qid": 0, 00:11:35.977 "state": "enabled", 00:11:35.977 "thread": "nvmf_tgt_poll_group_000", 00:11:35.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:35.977 "listen_address": { 00:11:35.977 "trtype": "TCP", 00:11:35.977 "adrfam": "IPv4", 00:11:35.977 "traddr": "10.0.0.3", 00:11:35.977 "trsvcid": "4420" 00:11:35.977 }, 00:11:35.977 "peer_address": { 00:11:35.977 "trtype": "TCP", 00:11:35.977 "adrfam": "IPv4", 00:11:35.977 "traddr": "10.0.0.1", 00:11:35.977 "trsvcid": "38432" 00:11:35.977 }, 00:11:35.977 "auth": { 00:11:35.977 "state": "completed", 00:11:35.977 "digest": "sha512", 00:11:35.977 "dhgroup": "null" 00:11:35.977 } 00:11:35.977 } 00:11:35.977 ]' 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.977 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.543 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:36.543 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:37.110 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:37.368 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:37.368 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.368 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.369 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.935 00:11:37.935 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.935 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.935 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.194 08:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.194 { 00:11:38.194 "cntlid": 99, 00:11:38.194 "qid": 0, 00:11:38.194 "state": "enabled", 00:11:38.194 "thread": "nvmf_tgt_poll_group_000", 00:11:38.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:38.194 "listen_address": { 00:11:38.194 "trtype": "TCP", 00:11:38.194 "adrfam": "IPv4", 00:11:38.194 "traddr": "10.0.0.3", 00:11:38.194 "trsvcid": "4420" 00:11:38.194 }, 00:11:38.194 "peer_address": { 00:11:38.194 "trtype": "TCP", 00:11:38.194 "adrfam": "IPv4", 00:11:38.194 "traddr": "10.0.0.1", 00:11:38.194 "trsvcid": "53564" 00:11:38.194 }, 00:11:38.194 "auth": { 00:11:38.194 "state": "completed", 00:11:38.194 "digest": "sha512", 00:11:38.194 "dhgroup": "null" 00:11:38.194 } 00:11:38.194 } 00:11:38.194 ]' 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.194 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.452 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:38.452 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.387 08:45:46 
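Besides the bdev_nvme attach path, each iteration also exercises the plain nvme-cli path: connect with the cleartext DHHC-1 secrets that correspond to the configured key pair, then disconnect again. With the real secrets replaced by placeholders, the recurring invocation is:

    # Secrets are placeholders here; the log above carries the real DHHC-1:xx:...: strings.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
        --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 \
        --dhchap-secret "DHHC-1:01:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: 1 controller(s) disconnected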
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.387 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.645 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.903 00:11:39.903 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.903 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.903 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.160 { 00:11:40.160 "cntlid": 101, 00:11:40.160 "qid": 0, 00:11:40.160 "state": "enabled", 00:11:40.160 "thread": "nvmf_tgt_poll_group_000", 00:11:40.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:40.160 "listen_address": { 00:11:40.160 "trtype": "TCP", 00:11:40.160 "adrfam": "IPv4", 00:11:40.160 "traddr": "10.0.0.3", 00:11:40.160 "trsvcid": "4420" 00:11:40.160 }, 00:11:40.160 "peer_address": { 00:11:40.160 "trtype": "TCP", 00:11:40.160 "adrfam": "IPv4", 00:11:40.160 "traddr": "10.0.0.1", 00:11:40.160 "trsvcid": "53580" 00:11:40.160 }, 00:11:40.160 "auth": { 00:11:40.160 "state": "completed", 00:11:40.160 "digest": "sha512", 00:11:40.160 "dhgroup": "null" 00:11:40.160 } 00:11:40.160 } 00:11:40.160 ]' 00:11:40.160 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.419 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.419 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.419 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.419 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.419 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.419 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.419 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.677 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:40.677 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:41.242 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.242 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.807 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.064 00:11:42.064 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.064 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.064 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.322 { 00:11:42.322 "cntlid": 103, 00:11:42.322 "qid": 0, 00:11:42.322 "state": "enabled", 00:11:42.322 "thread": "nvmf_tgt_poll_group_000", 00:11:42.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:42.322 "listen_address": { 00:11:42.322 "trtype": "TCP", 00:11:42.322 "adrfam": "IPv4", 00:11:42.322 "traddr": "10.0.0.3", 00:11:42.322 "trsvcid": "4420" 00:11:42.322 }, 00:11:42.322 "peer_address": { 00:11:42.322 "trtype": "TCP", 00:11:42.322 "adrfam": "IPv4", 00:11:42.322 "traddr": "10.0.0.1", 00:11:42.322 "trsvcid": "53606" 00:11:42.322 }, 00:11:42.322 "auth": { 00:11:42.322 "state": "completed", 00:11:42.322 "digest": "sha512", 00:11:42.322 "dhgroup": "null" 00:11:42.322 } 00:11:42.322 } 00:11:42.322 ]' 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.322 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.322 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.322 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.322 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.322 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.322 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.580 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:42.580 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:43.513 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.513 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.514 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.080 00:11:44.080 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.080 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.080 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.339 
08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.339 { 00:11:44.339 "cntlid": 105, 00:11:44.339 "qid": 0, 00:11:44.339 "state": "enabled", 00:11:44.339 "thread": "nvmf_tgt_poll_group_000", 00:11:44.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:44.339 "listen_address": { 00:11:44.339 "trtype": "TCP", 00:11:44.339 "adrfam": "IPv4", 00:11:44.339 "traddr": "10.0.0.3", 00:11:44.339 "trsvcid": "4420" 00:11:44.339 }, 00:11:44.339 "peer_address": { 00:11:44.339 "trtype": "TCP", 00:11:44.339 "adrfam": "IPv4", 00:11:44.339 "traddr": "10.0.0.1", 00:11:44.339 "trsvcid": "53624" 00:11:44.339 }, 00:11:44.339 "auth": { 00:11:44.339 "state": "completed", 00:11:44.339 "digest": "sha512", 00:11:44.339 "dhgroup": "ffdhe2048" 00:11:44.339 } 00:11:44.339 } 00:11:44.339 ]' 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:44.339 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.339 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.339 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.339 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.598 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:44.598 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:45.532 08:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.532 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.532 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.142 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.142 { 00:11:46.142 "cntlid": 107, 00:11:46.142 "qid": 0, 00:11:46.142 "state": "enabled", 00:11:46.142 "thread": "nvmf_tgt_poll_group_000", 00:11:46.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:46.142 "listen_address": { 00:11:46.142 "trtype": "TCP", 00:11:46.142 "adrfam": "IPv4", 00:11:46.142 "traddr": "10.0.0.3", 00:11:46.142 "trsvcid": "4420" 00:11:46.142 }, 00:11:46.142 "peer_address": { 00:11:46.142 "trtype": "TCP", 00:11:46.142 "adrfam": "IPv4", 00:11:46.142 "traddr": "10.0.0.1", 00:11:46.142 "trsvcid": "53664" 00:11:46.142 }, 00:11:46.142 "auth": { 00:11:46.142 "state": "completed", 00:11:46.142 "digest": "sha512", 00:11:46.142 "dhgroup": "ffdhe2048" 00:11:46.142 } 00:11:46.142 } 00:11:46.142 ]' 00:11:46.142 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.401 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.401 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.401 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.401 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.401 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.401 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.401 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.659 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:46.659 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:47.225 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.225 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:47.225 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.225 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.482 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.482 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.482 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.482 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.739 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.740 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.740 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.997 00:11:47.997 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.997 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.997 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.256 { 00:11:48.256 "cntlid": 109, 00:11:48.256 "qid": 0, 00:11:48.256 "state": "enabled", 00:11:48.256 "thread": "nvmf_tgt_poll_group_000", 00:11:48.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:48.256 "listen_address": { 00:11:48.256 "trtype": "TCP", 00:11:48.256 "adrfam": "IPv4", 00:11:48.256 "traddr": "10.0.0.3", 00:11:48.256 "trsvcid": "4420" 00:11:48.256 }, 00:11:48.256 "peer_address": { 00:11:48.256 "trtype": "TCP", 00:11:48.256 "adrfam": "IPv4", 00:11:48.256 "traddr": "10.0.0.1", 00:11:48.256 "trsvcid": "59624" 00:11:48.256 }, 00:11:48.256 "auth": { 00:11:48.256 "state": "completed", 00:11:48.256 "digest": "sha512", 00:11:48.256 "dhgroup": "ffdhe2048" 00:11:48.256 } 00:11:48.256 } 00:11:48.256 ]' 00:11:48.256 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.256 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.256 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.514 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.514 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.514 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.514 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.514 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.772 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:48.772 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
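For reference, every pass of the auth loop traced in this log runs the same sequence of target- and host-side steps before moving to the next key/dhgroup combination. A condensed sketch of one iteration is below, assuming the same RPC socket, addresses and subsystem NQN as this run; $hostnqn, $hostid and $rpc are illustrative placeholders, and the DHHC-1 strings stand in for real secrets rather than the keys used above.

# target side: allow the host NQN on cnode0 with a given key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host side: restrict the initiator to one digest/dhgroup, then attach with the matching keys
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# verify the negotiated parameters on the target side, then detach the bdev controller
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# repeat the connection through the kernel initiator with the in-band secrets, then tear everything down
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:placeholder:" --dhchap-ctrl-secret "DHHC-1:03:placeholder:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"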
00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.707 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.965 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.224 00:11:50.224 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.224 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.224 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.483 { 00:11:50.483 "cntlid": 111, 00:11:50.483 "qid": 0, 00:11:50.483 "state": "enabled", 00:11:50.483 "thread": "nvmf_tgt_poll_group_000", 00:11:50.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:50.483 "listen_address": { 00:11:50.483 "trtype": "TCP", 00:11:50.483 "adrfam": "IPv4", 00:11:50.483 "traddr": "10.0.0.3", 00:11:50.483 "trsvcid": "4420" 00:11:50.483 }, 00:11:50.483 "peer_address": { 00:11:50.483 "trtype": "TCP", 00:11:50.483 "adrfam": "IPv4", 00:11:50.483 "traddr": "10.0.0.1", 00:11:50.483 "trsvcid": "59652" 00:11:50.483 }, 00:11:50.483 "auth": { 00:11:50.483 "state": "completed", 00:11:50.483 "digest": "sha512", 00:11:50.483 "dhgroup": "ffdhe2048" 00:11:50.483 } 00:11:50.483 } 00:11:50.483 ]' 00:11:50.483 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.741 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.999 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:51.000 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.934 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.193 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.193 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.193 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.193 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.451 00:11:52.451 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.451 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.451 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.709 { 00:11:52.709 "cntlid": 113, 00:11:52.709 "qid": 0, 00:11:52.709 "state": "enabled", 00:11:52.709 "thread": "nvmf_tgt_poll_group_000", 00:11:52.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:52.709 "listen_address": { 00:11:52.709 "trtype": "TCP", 00:11:52.709 "adrfam": "IPv4", 00:11:52.709 "traddr": "10.0.0.3", 00:11:52.709 "trsvcid": "4420" 00:11:52.709 }, 00:11:52.709 "peer_address": { 00:11:52.709 "trtype": "TCP", 00:11:52.709 "adrfam": "IPv4", 00:11:52.709 "traddr": "10.0.0.1", 00:11:52.709 "trsvcid": "59674" 00:11:52.709 }, 00:11:52.709 "auth": { 00:11:52.709 "state": "completed", 00:11:52.709 "digest": "sha512", 00:11:52.709 "dhgroup": "ffdhe3072" 00:11:52.709 } 00:11:52.709 } 00:11:52.709 ]' 00:11:52.709 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.968 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.226 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:53.226 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret 
DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.161 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.418 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:54.418 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.418 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.418 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.418 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.419 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.676 00:11:54.677 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.677 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.677 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.934 { 00:11:54.934 "cntlid": 115, 00:11:54.934 "qid": 0, 00:11:54.934 "state": "enabled", 00:11:54.934 "thread": "nvmf_tgt_poll_group_000", 00:11:54.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:54.934 "listen_address": { 00:11:54.934 "trtype": "TCP", 00:11:54.934 "adrfam": "IPv4", 00:11:54.934 "traddr": "10.0.0.3", 00:11:54.934 "trsvcid": "4420" 00:11:54.934 }, 00:11:54.934 "peer_address": { 00:11:54.934 "trtype": "TCP", 00:11:54.934 "adrfam": "IPv4", 00:11:54.934 "traddr": "10.0.0.1", 00:11:54.934 "trsvcid": "59696" 00:11:54.934 }, 00:11:54.934 "auth": { 00:11:54.934 "state": "completed", 00:11:54.934 "digest": "sha512", 00:11:54.934 "dhgroup": "ffdhe3072" 00:11:54.934 } 00:11:54.934 } 00:11:54.934 ]' 00:11:54.934 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.193 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.451 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:55.451 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 
19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.385 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.644 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.903 00:11:56.903 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.903 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.903 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.163 { 00:11:57.163 "cntlid": 117, 00:11:57.163 "qid": 0, 00:11:57.163 "state": "enabled", 00:11:57.163 "thread": "nvmf_tgt_poll_group_000", 00:11:57.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:57.163 "listen_address": { 00:11:57.163 "trtype": "TCP", 00:11:57.163 "adrfam": "IPv4", 00:11:57.163 "traddr": "10.0.0.3", 00:11:57.163 "trsvcid": "4420" 00:11:57.163 }, 00:11:57.163 "peer_address": { 00:11:57.163 "trtype": "TCP", 00:11:57.163 "adrfam": "IPv4", 00:11:57.163 "traddr": "10.0.0.1", 00:11:57.163 "trsvcid": "59714" 00:11:57.163 }, 00:11:57.163 "auth": { 00:11:57.163 "state": "completed", 00:11:57.163 "digest": "sha512", 00:11:57.163 "dhgroup": "ffdhe3072" 00:11:57.163 } 00:11:57.163 } 00:11:57.163 ]' 00:11:57.163 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.435 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.435 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.435 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.435 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.435 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.435 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.435 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.693 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:57.693 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.630 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.197 00:11:59.197 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.197 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.197 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.456 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.456 { 00:11:59.456 "cntlid": 119, 00:11:59.456 "qid": 0, 00:11:59.456 "state": "enabled", 00:11:59.456 "thread": "nvmf_tgt_poll_group_000", 00:11:59.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:11:59.456 "listen_address": { 00:11:59.456 "trtype": "TCP", 00:11:59.456 "adrfam": "IPv4", 00:11:59.456 "traddr": "10.0.0.3", 00:11:59.456 "trsvcid": "4420" 00:11:59.456 }, 00:11:59.456 "peer_address": { 00:11:59.456 "trtype": "TCP", 00:11:59.456 "adrfam": "IPv4", 00:11:59.456 "traddr": "10.0.0.1", 00:11:59.456 "trsvcid": "41796" 00:11:59.456 }, 00:11:59.456 "auth": { 00:11:59.456 "state": "completed", 00:11:59.456 "digest": "sha512", 00:11:59.456 "dhgroup": "ffdhe3072" 00:11:59.456 } 00:11:59.456 } 00:11:59.456 ]' 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.456 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.715 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:11:59.715 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.651 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.909 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:00.909 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.909 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.910 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.168 00:12:01.168 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.168 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.168 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.427 { 00:12:01.427 "cntlid": 121, 00:12:01.427 "qid": 0, 00:12:01.427 "state": "enabled", 00:12:01.427 "thread": "nvmf_tgt_poll_group_000", 00:12:01.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:01.427 "listen_address": { 00:12:01.427 "trtype": "TCP", 00:12:01.427 "adrfam": "IPv4", 00:12:01.427 "traddr": "10.0.0.3", 00:12:01.427 "trsvcid": "4420" 00:12:01.427 }, 00:12:01.427 "peer_address": { 00:12:01.427 "trtype": "TCP", 00:12:01.427 "adrfam": "IPv4", 00:12:01.427 "traddr": "10.0.0.1", 00:12:01.427 "trsvcid": "41832" 00:12:01.427 }, 00:12:01.427 "auth": { 00:12:01.427 "state": "completed", 00:12:01.427 "digest": "sha512", 00:12:01.427 "dhgroup": "ffdhe4096" 00:12:01.427 } 00:12:01.427 } 00:12:01.427 ]' 00:12:01.427 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.686 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.945 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret 
DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:01.945 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.512 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.080 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.340 00:12:03.340 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.340 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.340 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.598 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.599 { 00:12:03.599 "cntlid": 123, 00:12:03.599 "qid": 0, 00:12:03.599 "state": "enabled", 00:12:03.599 "thread": "nvmf_tgt_poll_group_000", 00:12:03.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:03.599 "listen_address": { 00:12:03.599 "trtype": "TCP", 00:12:03.599 "adrfam": "IPv4", 00:12:03.599 "traddr": "10.0.0.3", 00:12:03.599 "trsvcid": "4420" 00:12:03.599 }, 00:12:03.599 "peer_address": { 00:12:03.599 "trtype": "TCP", 00:12:03.599 "adrfam": "IPv4", 00:12:03.599 "traddr": "10.0.0.1", 00:12:03.599 "trsvcid": "41844" 00:12:03.599 }, 00:12:03.599 "auth": { 00:12:03.599 "state": "completed", 00:12:03.599 "digest": "sha512", 00:12:03.599 "dhgroup": "ffdhe4096" 00:12:03.599 } 00:12:03.599 } 00:12:03.599 ]' 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.599 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.858 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.858 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.858 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.858 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.858 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.117 08:46:11 
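The lines above complete one full connect_authenticate cycle (sha512/ffdhe4096, key index 1): register the host on the subsystem with its DH-HMAC-CHAP key pair, attach a controller from the host application, check the negotiated auth parameters on the target's qpair, and detach. Condensed into a sketch — hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock as shown in the trace, rpc_cmd is the suite's target-side rpc.py wrapper, and feeding the captured JSON to jq via here-strings is an assumption (the trace does not expand that plumbing):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce
  subnqn=nqn.2024-03.io.spdk:cnode0
  # limit the host to the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # allow the host on the subsystem with its DH-HMAC-CHAP key pair
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller from the host application, authenticating with the same keys
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
  # target-side view of the authenticated qpair
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  jq -r '.[0].auth.digest'  <<< "$qpairs"                   # expect: sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                   # expect: ffdhe4096
  jq -r '.[0].auth.state'   <<< "$qpairs"                   # expect: completed
  hostrpc bdev_nvme_detach_controller nvme0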
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:04.117 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.054 08:46:12 
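Interleaved with those bdev-level cycles, each key pair is also exercised end to end with nvme-cli: the nvme_connect/nvme disconnect pair above reconnects the kernel host to the same subsystem using the corresponding DHHC-1 secrets and then tears the session down. Stripped to its essentials (secrets elided; the flags are the ones visible in the trace, passed exactly as the helper passes them):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q "$hostnqn" --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce \
      --dhchap-secret 'DHHC-1:01:<host key, elided>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, elided>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # trace shows: disconnected 1 controller(s)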
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.054 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.621 00:12:05.621 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.621 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.621 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.880 { 00:12:05.880 "cntlid": 125, 00:12:05.880 "qid": 0, 00:12:05.880 "state": "enabled", 00:12:05.880 "thread": "nvmf_tgt_poll_group_000", 00:12:05.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:05.880 "listen_address": { 00:12:05.880 "trtype": "TCP", 00:12:05.880 "adrfam": "IPv4", 00:12:05.880 "traddr": "10.0.0.3", 00:12:05.880 "trsvcid": "4420" 00:12:05.880 }, 00:12:05.880 "peer_address": { 00:12:05.880 "trtype": "TCP", 00:12:05.880 "adrfam": "IPv4", 00:12:05.880 "traddr": "10.0.0.1", 00:12:05.880 "trsvcid": "41858" 00:12:05.880 }, 00:12:05.880 "auth": { 00:12:05.880 "state": "completed", 00:12:05.880 "digest": "sha512", 00:12:05.880 "dhgroup": "ffdhe4096" 00:12:05.880 } 00:12:05.880 } 00:12:05.880 ]' 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.880 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.138 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.139 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.139 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.397 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:06.397 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:06.964 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.223 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.489 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.489 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:07.489 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.489 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.748 00:12:07.748 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.748 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.748 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.025 { 00:12:08.025 "cntlid": 127, 00:12:08.025 "qid": 0, 00:12:08.025 "state": "enabled", 00:12:08.025 "thread": "nvmf_tgt_poll_group_000", 00:12:08.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:08.025 "listen_address": { 00:12:08.025 "trtype": "TCP", 00:12:08.025 "adrfam": "IPv4", 00:12:08.025 "traddr": "10.0.0.3", 00:12:08.025 "trsvcid": "4420" 00:12:08.025 }, 00:12:08.025 "peer_address": { 00:12:08.025 "trtype": "TCP", 00:12:08.025 "adrfam": "IPv4", 00:12:08.025 "traddr": "10.0.0.1", 00:12:08.025 "trsvcid": "32884" 00:12:08.025 }, 00:12:08.025 "auth": { 00:12:08.025 "state": "completed", 00:12:08.025 "digest": "sha512", 00:12:08.025 "dhgroup": "ffdhe4096" 00:12:08.025 } 00:12:08.025 } 00:12:08.025 ]' 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.025 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.306 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.306 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.306 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.565 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:08.565 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.132 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.391 08:46:17 
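One asymmetry worth noting in the cycles above: there is no controller key for key index 3, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace produces an empty array and the key3 pass runs with host authentication only — which is why its nvmf_subsystem_add_host, bdev_nvme_attach_controller and nvme connect lines carry no --dhchap-ctrlr-key/--dhchap-ctrl-secret. A minimal sketch of that expansion (inside connect_authenticate, $3 is the key index; $keyid below is only for readability):

  keyid=$3                                                    # e.g. 3 in "connect_authenticate sha512 ffdhe4096 3"
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})    # empty array when ckeys[keyid] is unset
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"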
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.391 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.958 00:12:09.958 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.959 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.959 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.217 { 00:12:10.217 "cntlid": 129, 00:12:10.217 "qid": 0, 00:12:10.217 "state": "enabled", 00:12:10.217 "thread": "nvmf_tgt_poll_group_000", 00:12:10.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:10.217 "listen_address": { 00:12:10.217 "trtype": "TCP", 00:12:10.217 "adrfam": "IPv4", 00:12:10.217 "traddr": "10.0.0.3", 00:12:10.217 "trsvcid": "4420" 00:12:10.217 }, 00:12:10.217 "peer_address": { 00:12:10.217 "trtype": "TCP", 00:12:10.217 "adrfam": "IPv4", 00:12:10.217 "traddr": "10.0.0.1", 00:12:10.217 "trsvcid": "32906" 00:12:10.217 }, 00:12:10.217 "auth": { 00:12:10.217 "state": "completed", 00:12:10.217 "digest": "sha512", 00:12:10.217 "dhgroup": "ffdhe6144" 00:12:10.217 } 00:12:10.217 } 00:12:10.217 ]' 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.217 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.218 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.218 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.218 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.476 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:10.476 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:11.412 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.413 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.671 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.672 08:46:19 
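Taking a step back, this whole section is the sha512 leg of a nested sweep: an outer loop over DH groups (ffdhe4096 above, ffdhe6144 here, ffdhe8192 further down) and an inner loop over the four key indexes, with the host's allowed digest/dhgroup reset before every cycle. The loop heads are visible in the trace at target/auth.sh@119-@123; roughly:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done

sha512 is hard-coded here only because it is the digest of this excerpt; the surrounding runs presumably sweep the other digests the same way.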
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.672 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.930 00:12:11.930 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.930 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.930 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.190 { 00:12:12.190 "cntlid": 131, 00:12:12.190 "qid": 0, 00:12:12.190 "state": "enabled", 00:12:12.190 "thread": "nvmf_tgt_poll_group_000", 00:12:12.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:12.190 "listen_address": { 00:12:12.190 "trtype": "TCP", 00:12:12.190 "adrfam": "IPv4", 00:12:12.190 "traddr": "10.0.0.3", 00:12:12.190 "trsvcid": "4420" 00:12:12.190 }, 00:12:12.190 "peer_address": { 00:12:12.190 "trtype": "TCP", 00:12:12.190 "adrfam": "IPv4", 00:12:12.190 "traddr": "10.0.0.1", 00:12:12.190 "trsvcid": "32932" 00:12:12.190 }, 00:12:12.190 "auth": { 00:12:12.190 "state": "completed", 00:12:12.190 "digest": "sha512", 00:12:12.190 "dhgroup": "ffdhe6144" 00:12:12.190 } 00:12:12.190 } 00:12:12.190 ]' 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.190 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.449 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.449 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:12.449 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.449 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.449 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.708 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:12.708 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:13.275 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.275 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:13.275 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.275 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.533 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.533 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.533 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.533 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.792 08:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.792 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.052 00:12:14.052 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.052 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.052 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.311 { 00:12:14.311 "cntlid": 133, 00:12:14.311 "qid": 0, 00:12:14.311 "state": "enabled", 00:12:14.311 "thread": "nvmf_tgt_poll_group_000", 00:12:14.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:14.311 "listen_address": { 00:12:14.311 "trtype": "TCP", 00:12:14.311 "adrfam": "IPv4", 00:12:14.311 "traddr": "10.0.0.3", 00:12:14.311 "trsvcid": "4420" 00:12:14.311 }, 00:12:14.311 "peer_address": { 00:12:14.311 "trtype": "TCP", 00:12:14.311 "adrfam": "IPv4", 00:12:14.311 "traddr": "10.0.0.1", 00:12:14.311 "trsvcid": "32958" 00:12:14.311 }, 00:12:14.311 "auth": { 00:12:14.311 "state": "completed", 00:12:14.311 "digest": "sha512", 00:12:14.311 "dhgroup": "ffdhe6144" 00:12:14.311 } 00:12:14.311 } 00:12:14.311 ]' 00:12:14.311 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.570 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.829 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:14.829 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.397 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.656 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.327 00:12:16.327 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.327 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.327 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.327 { 00:12:16.327 "cntlid": 135, 00:12:16.327 "qid": 0, 00:12:16.327 "state": "enabled", 00:12:16.327 "thread": "nvmf_tgt_poll_group_000", 00:12:16.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:16.327 "listen_address": { 00:12:16.327 "trtype": "TCP", 00:12:16.327 "adrfam": "IPv4", 00:12:16.327 "traddr": "10.0.0.3", 00:12:16.327 "trsvcid": "4420" 00:12:16.327 }, 00:12:16.327 "peer_address": { 00:12:16.327 "trtype": "TCP", 00:12:16.327 "adrfam": "IPv4", 00:12:16.327 "traddr": "10.0.0.1", 00:12:16.327 "trsvcid": "32976" 00:12:16.327 }, 00:12:16.327 "auth": { 00:12:16.327 "state": "completed", 00:12:16.327 "digest": "sha512", 00:12:16.327 "dhgroup": "ffdhe6144" 00:12:16.327 } 00:12:16.327 } 00:12:16.327 ]' 00:12:16.327 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.589 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.848 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:16.848 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.422 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.692 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.262 00:12:18.521 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.521 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.521 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.779 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.779 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.779 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.779 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.779 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.780 { 00:12:18.780 "cntlid": 137, 00:12:18.780 "qid": 0, 00:12:18.780 "state": "enabled", 00:12:18.780 "thread": "nvmf_tgt_poll_group_000", 00:12:18.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:18.780 "listen_address": { 00:12:18.780 "trtype": "TCP", 00:12:18.780 "adrfam": "IPv4", 00:12:18.780 "traddr": "10.0.0.3", 00:12:18.780 "trsvcid": "4420" 00:12:18.780 }, 00:12:18.780 "peer_address": { 00:12:18.780 "trtype": "TCP", 00:12:18.780 "adrfam": "IPv4", 00:12:18.780 "traddr": "10.0.0.1", 00:12:18.780 "trsvcid": "56106" 00:12:18.780 }, 00:12:18.780 "auth": { 00:12:18.780 "state": "completed", 00:12:18.780 "digest": "sha512", 00:12:18.780 "dhgroup": "ffdhe8192" 00:12:18.780 } 00:12:18.780 } 00:12:18.780 ]' 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.780 08:46:26 
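The nvmf_subsystem_get_qpairs output captured above is the target-side record used for every check in this section: one element per qpair with the cntlid, listen and peer addresses, and an auth object holding the negotiated state, digest and dhgroup. The per-field jq checks used throughout the trace could equally be collapsed into a single assertion over the captured JSON (a hypothetical condensation, not what the script actually runs):

  echo "$qpairs" | jq -e \
      '.[0].auth | .state == "completed" and .digest == "sha512" and .dhgroup == "ffdhe8192"'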
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.780 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.038 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:19.038 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:19.605 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.864 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:19.864 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.864 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.864 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.865 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.865 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.865 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:20.124 08:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.124 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.692 00:12:20.692 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.692 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.692 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.951 { 00:12:20.951 "cntlid": 139, 00:12:20.951 "qid": 0, 00:12:20.951 "state": "enabled", 00:12:20.951 "thread": "nvmf_tgt_poll_group_000", 00:12:20.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:20.951 "listen_address": { 00:12:20.951 "trtype": "TCP", 00:12:20.951 "adrfam": "IPv4", 00:12:20.951 "traddr": "10.0.0.3", 00:12:20.951 "trsvcid": "4420" 00:12:20.951 }, 00:12:20.951 "peer_address": { 00:12:20.951 "trtype": "TCP", 00:12:20.951 "adrfam": "IPv4", 00:12:20.951 "traddr": "10.0.0.1", 00:12:20.951 "trsvcid": "56126" 00:12:20.951 }, 00:12:20.951 "auth": { 00:12:20.951 "state": "completed", 00:12:20.951 "digest": "sha512", 00:12:20.951 "dhgroup": "ffdhe8192" 00:12:20.951 } 00:12:20.951 } 00:12:20.951 ]' 00:12:20.951 08:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.951 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.210 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.210 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.210 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.210 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.210 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.468 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:21.468 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: --dhchap-ctrl-secret DHHC-1:02:MDUwOWU5ODRjYjk2OWM5YWZhNjE5YmE0ZjNlMDhkY2Y3ZTc5YjcxNDBkMmNmNjBk5F675w==: 00:12:22.036 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.036 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:22.036 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.036 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.295 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.295 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:22.295 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.553 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.121 00:12:23.121 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.121 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.121 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.379 { 00:12:23.379 "cntlid": 141, 00:12:23.379 "qid": 0, 00:12:23.379 "state": "enabled", 00:12:23.379 "thread": "nvmf_tgt_poll_group_000", 00:12:23.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:23.379 "listen_address": { 00:12:23.379 "trtype": "TCP", 00:12:23.379 "adrfam": "IPv4", 00:12:23.379 "traddr": "10.0.0.3", 00:12:23.379 "trsvcid": "4420" 00:12:23.379 }, 00:12:23.379 "peer_address": { 00:12:23.379 "trtype": "TCP", 00:12:23.379 "adrfam": "IPv4", 00:12:23.379 "traddr": "10.0.0.1", 00:12:23.379 "trsvcid": "56152" 00:12:23.379 }, 00:12:23.379 "auth": { 00:12:23.379 "state": "completed", 00:12:23.379 "digest": 
"sha512", 00:12:23.379 "dhgroup": "ffdhe8192" 00:12:23.379 } 00:12:23.379 } 00:12:23.379 ]' 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.379 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.638 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.638 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.638 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.638 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.638 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.897 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:23.897 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:01:MDQ1OGRlYWMyNmNlNGUwZTgyMjYwYzU2OTQ3Mzc2ODdMCEx4: 00:12:24.465 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.723 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.982 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.550 00:12:25.550 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.550 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.550 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.808 { 00:12:25.808 "cntlid": 143, 00:12:25.808 "qid": 0, 00:12:25.808 "state": "enabled", 00:12:25.808 "thread": "nvmf_tgt_poll_group_000", 00:12:25.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:25.808 "listen_address": { 00:12:25.808 "trtype": "TCP", 00:12:25.808 "adrfam": "IPv4", 00:12:25.808 "traddr": "10.0.0.3", 00:12:25.808 "trsvcid": "4420" 00:12:25.808 }, 00:12:25.808 "peer_address": { 00:12:25.808 "trtype": "TCP", 00:12:25.808 "adrfam": "IPv4", 00:12:25.808 "traddr": "10.0.0.1", 00:12:25.808 "trsvcid": "56182" 00:12:25.808 }, 00:12:25.808 "auth": { 00:12:25.808 "state": "completed", 00:12:25.808 
"digest": "sha512", 00:12:25.808 "dhgroup": "ffdhe8192" 00:12:25.808 } 00:12:25.808 } 00:12:25.808 ]' 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.808 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.067 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.067 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.067 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.326 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:26.326 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:26.894 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.466 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.035 00:12:28.035 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.035 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.035 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.294 { 00:12:28.294 "cntlid": 145, 00:12:28.294 "qid": 0, 00:12:28.294 "state": "enabled", 00:12:28.294 "thread": "nvmf_tgt_poll_group_000", 00:12:28.294 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:28.294 "listen_address": { 00:12:28.294 "trtype": "TCP", 00:12:28.294 "adrfam": "IPv4", 00:12:28.294 "traddr": "10.0.0.3", 00:12:28.294 "trsvcid": "4420" 00:12:28.294 }, 00:12:28.294 "peer_address": { 00:12:28.294 "trtype": "TCP", 00:12:28.294 "adrfam": "IPv4", 00:12:28.294 "traddr": "10.0.0.1", 00:12:28.294 "trsvcid": "43320" 00:12:28.294 }, 00:12:28.294 "auth": { 00:12:28.294 "state": "completed", 00:12:28.294 "digest": "sha512", 00:12:28.294 "dhgroup": "ffdhe8192" 00:12:28.294 } 00:12:28.294 } 00:12:28.294 ]' 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.294 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.294 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.294 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.294 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.553 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:28.553 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:00:YzFjZjhmY2VhODRjZTMxOWNlOTFiMjRjMzNmY2ZkZTcyNTZmZWMyZTgwYzg0NTU2+ElJGA==: --dhchap-ctrl-secret DHHC-1:03:OGZlODk3OTA5ZGJmYmNhYjg3YzFkN2ZiYWZkZTBhNTczYTc2MTZjZTlhYTIzNDZiZWU2NzM5YmM0ZGI3MTY2MmRsj98=: 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 00:12:29.490 08:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:29.490 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:30.058 request: 00:12:30.058 { 00:12:30.058 "name": "nvme0", 00:12:30.058 "trtype": "tcp", 00:12:30.058 "traddr": "10.0.0.3", 00:12:30.058 "adrfam": "ipv4", 00:12:30.058 "trsvcid": "4420", 00:12:30.058 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:30.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:30.058 "prchk_reftag": false, 00:12:30.058 "prchk_guard": false, 00:12:30.058 "hdgst": false, 00:12:30.058 "ddgst": false, 00:12:30.058 "dhchap_key": "key2", 00:12:30.058 "allow_unrecognized_csi": false, 00:12:30.058 "method": "bdev_nvme_attach_controller", 00:12:30.058 "req_id": 1 00:12:30.058 } 00:12:30.058 Got JSON-RPC error response 00:12:30.058 response: 00:12:30.058 { 00:12:30.058 "code": -5, 00:12:30.058 "message": "Input/output error" 00:12:30.058 } 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:30.058 
08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.058 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.627 request: 00:12:30.627 { 00:12:30.627 "name": "nvme0", 00:12:30.627 "trtype": "tcp", 00:12:30.627 "traddr": "10.0.0.3", 00:12:30.627 "adrfam": "ipv4", 00:12:30.627 "trsvcid": "4420", 00:12:30.627 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:30.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:30.627 "prchk_reftag": false, 00:12:30.627 "prchk_guard": false, 00:12:30.627 "hdgst": false, 00:12:30.627 "ddgst": false, 00:12:30.627 "dhchap_key": "key1", 00:12:30.627 "dhchap_ctrlr_key": "ckey2", 00:12:30.627 "allow_unrecognized_csi": false, 00:12:30.627 "method": "bdev_nvme_attach_controller", 00:12:30.627 "req_id": 1 00:12:30.627 } 00:12:30.627 Got JSON-RPC error response 00:12:30.627 response: 00:12:30.627 { 
00:12:30.627 "code": -5, 00:12:30.627 "message": "Input/output error" 00:12:30.627 } 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.627 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.199 
request: 00:12:31.199 { 00:12:31.199 "name": "nvme0", 00:12:31.199 "trtype": "tcp", 00:12:31.199 "traddr": "10.0.0.3", 00:12:31.199 "adrfam": "ipv4", 00:12:31.199 "trsvcid": "4420", 00:12:31.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:31.199 "prchk_reftag": false, 00:12:31.199 "prchk_guard": false, 00:12:31.199 "hdgst": false, 00:12:31.199 "ddgst": false, 00:12:31.199 "dhchap_key": "key1", 00:12:31.199 "dhchap_ctrlr_key": "ckey1", 00:12:31.199 "allow_unrecognized_csi": false, 00:12:31.199 "method": "bdev_nvme_attach_controller", 00:12:31.199 "req_id": 1 00:12:31.199 } 00:12:31.199 Got JSON-RPC error response 00:12:31.199 response: 00:12:31.199 { 00:12:31.199 "code": -5, 00:12:31.199 "message": "Input/output error" 00:12:31.199 } 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 68079 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68079 ']' 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68079 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68079 00:12:31.199 killing process with pid 68079 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68079' 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68079 00:12:31.199 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68079 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.495 08:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=71165 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 71165 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71165 ']' 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.495 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 71165 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71165 ']' 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
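At this point target/auth.sh has restarted nvmf_tgt with -L nvmf_auth and --wait-for-rpc (pid 71165) so that the DH-HMAC-CHAP keys can be reloaded through the SPDK keyring before the authentication paths are exercised again. A minimal sketch of the keyring-based setup that the trace below walks through, assuming the key file /tmp/spdk.key-sha512.ApX generated earlier in the run is still present (controller keys, when they exist, are registered the same way under ckey* names):

  # target side: register the key file with the keyring and allow the host to authenticate with it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.ApX
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3
  # host side: attach a controller over TCP using the same key
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3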
00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.754 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.013 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.013 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:32.013 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:32.013 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.013 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 null0 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.R5K 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.gOC ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gOC 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.160 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.LaH ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LaH 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:32.272 08:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O7B 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.O2g ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O2g 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ApX 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
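Each successful attach in this phase is then verified the same way as before: the host-side controller list must contain nvme0, and the target-side qpair must report that DH-HMAC-CHAP completed with the expected digest and dhgroup. A sketch of that check, assuming the host RPC socket is still /var/tmp/host.sock:

  # host side: the attached controller should be listed as nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # target side: the qpair should show completed sha512/ffdhe8192 authentication
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'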
00:12:32.272 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.209 nvme0n1 00:12:33.209 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.209 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.209 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.467 { 00:12:33.467 "cntlid": 1, 00:12:33.467 "qid": 0, 00:12:33.467 "state": "enabled", 00:12:33.467 "thread": "nvmf_tgt_poll_group_000", 00:12:33.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:33.467 "listen_address": { 00:12:33.467 "trtype": "TCP", 00:12:33.467 "adrfam": "IPv4", 00:12:33.467 "traddr": "10.0.0.3", 00:12:33.467 "trsvcid": "4420" 00:12:33.467 }, 00:12:33.467 "peer_address": { 00:12:33.467 "trtype": "TCP", 00:12:33.467 "adrfam": "IPv4", 00:12:33.467 "traddr": "10.0.0.1", 00:12:33.467 "trsvcid": "43388" 00:12:33.467 }, 00:12:33.467 "auth": { 00:12:33.467 "state": "completed", 00:12:33.467 "digest": "sha512", 00:12:33.467 "dhgroup": "ffdhe8192" 00:12:33.467 } 00:12:33.467 } 00:12:33.467 ]' 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.467 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.726 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.726 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.726 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.726 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.726 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.985 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:33.985 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:34.552 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.552 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:34.552 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.552 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key3 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:34.811 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.070 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.328 request: 00:12:35.328 { 00:12:35.328 "name": "nvme0", 00:12:35.328 "trtype": "tcp", 00:12:35.328 "traddr": "10.0.0.3", 00:12:35.328 "adrfam": "ipv4", 00:12:35.328 "trsvcid": "4420", 00:12:35.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:35.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:35.328 "prchk_reftag": false, 00:12:35.328 "prchk_guard": false, 00:12:35.328 "hdgst": false, 00:12:35.328 "ddgst": false, 00:12:35.328 "dhchap_key": "key3", 00:12:35.328 "allow_unrecognized_csi": false, 00:12:35.328 "method": "bdev_nvme_attach_controller", 00:12:35.328 "req_id": 1 00:12:35.328 } 00:12:35.328 Got JSON-RPC error response 00:12:35.328 response: 00:12:35.328 { 00:12:35.328 "code": -5, 00:12:35.328 "message": "Input/output error" 00:12:35.328 } 00:12:35.328 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:35.329 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.587 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:35.588 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.588 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.588 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.588 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.846 request: 00:12:35.846 { 00:12:35.846 "name": "nvme0", 00:12:35.846 "trtype": "tcp", 00:12:35.846 "traddr": "10.0.0.3", 00:12:35.846 "adrfam": "ipv4", 00:12:35.846 "trsvcid": "4420", 00:12:35.846 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:35.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:35.846 "prchk_reftag": false, 00:12:35.846 "prchk_guard": false, 00:12:35.846 "hdgst": false, 00:12:35.846 "ddgst": false, 00:12:35.846 "dhchap_key": "key3", 00:12:35.846 "allow_unrecognized_csi": false, 00:12:35.846 "method": "bdev_nvme_attach_controller", 00:12:35.846 "req_id": 1 00:12:35.846 } 00:12:35.846 Got JSON-RPC error response 00:12:35.846 response: 00:12:35.846 { 00:12:35.846 "code": -5, 00:12:35.846 "message": "Input/output error" 00:12:35.846 } 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:35.846 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.847 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.847 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:36.105 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.106 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.106 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.106 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.364 request: 00:12:36.364 { 00:12:36.364 "name": "nvme0", 00:12:36.364 "trtype": "tcp", 00:12:36.364 "traddr": "10.0.0.3", 00:12:36.364 "adrfam": "ipv4", 00:12:36.364 "trsvcid": "4420", 00:12:36.364 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:36.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:36.364 "prchk_reftag": false, 00:12:36.364 "prchk_guard": false, 00:12:36.364 "hdgst": false, 00:12:36.364 "ddgst": false, 00:12:36.364 "dhchap_key": "key0", 00:12:36.364 "dhchap_ctrlr_key": "key1", 00:12:36.364 "allow_unrecognized_csi": false, 00:12:36.364 "method": "bdev_nvme_attach_controller", 00:12:36.364 "req_id": 1 00:12:36.364 } 00:12:36.364 Got JSON-RPC error response 00:12:36.364 response: 00:12:36.364 { 00:12:36.364 "code": -5, 00:12:36.364 "message": "Input/output error" 00:12:36.364 } 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:36.623 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:36.882 nvme0n1 00:12:36.882 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:36.882 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.882 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:37.140 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.140 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.140 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:37.404 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:38.341 nvme0n1 00:12:38.341 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:38.341 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:38.341 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:38.600 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.858 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.858 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:38.858 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid 19057b12-55d1-482d-ac95-8c26bd7da4ce -l 0 --dhchap-secret DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: --dhchap-ctrl-secret DHHC-1:03:NjIwZDIxMmFiYmY1OWY5NmNlNjEwMWNlMGYxM2VkYzEwYzMxMzcwZGZjMThjODAwNzJjNGNhOTk1ZThhYzkxMCSNi6k=: 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.795 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:40.053 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:40.620 request: 00:12:40.620 { 00:12:40.620 "name": "nvme0", 00:12:40.620 "trtype": "tcp", 00:12:40.620 "traddr": "10.0.0.3", 00:12:40.620 "adrfam": "ipv4", 00:12:40.620 "trsvcid": "4420", 00:12:40.620 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:40.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce", 00:12:40.620 "prchk_reftag": false, 00:12:40.620 "prchk_guard": false, 00:12:40.620 "hdgst": false, 00:12:40.620 "ddgst": false, 00:12:40.620 "dhchap_key": "key1", 00:12:40.620 "allow_unrecognized_csi": false, 00:12:40.620 "method": "bdev_nvme_attach_controller", 00:12:40.620 "req_id": 1 00:12:40.620 } 00:12:40.620 Got JSON-RPC error response 00:12:40.620 response: 00:12:40.620 { 00:12:40.620 "code": -5, 00:12:40.620 "message": "Input/output error" 00:12:40.620 } 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:40.620 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:41.556 nvme0n1 00:12:41.556 
08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:41.556 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:41.556 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.815 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.815 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.815 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:42.076 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:42.352 nvme0n1 00:12:42.352 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:42.352 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.352 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:42.610 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.610 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.610 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.868 08:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: '' 2s 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:42.868 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: ]] 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDYzYTViN2JmNDM5YmY3ODg3ZjA1ZjlkNTgxZmJkOTAXE4xE: 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:42.869 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: 2s 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:45.402 08:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: ]] 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjQwY2ExN2VkZmYyYmYzYWE5ZmI5OWUyODcwODc2NTVhZGIxNGFmMDQwMDU3NWIxE90cfg==: 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:45.402 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:47.306 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:48.241 nvme0n1 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:48.241 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:48.808 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:48.808 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:48.808 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:49.066 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:49.325 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:49.325 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.325 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:49.583 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.583 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:49.584 08:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:49.584 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:50.151 request: 00:12:50.151 { 00:12:50.151 "name": "nvme0", 00:12:50.151 "dhchap_key": "key1", 00:12:50.151 "dhchap_ctrlr_key": "key3", 00:12:50.151 "method": "bdev_nvme_set_keys", 00:12:50.151 "req_id": 1 00:12:50.151 } 00:12:50.151 Got JSON-RPC error response 00:12:50.151 response: 00:12:50.151 { 00:12:50.151 "code": -13, 00:12:50.151 "message": "Permission denied" 00:12:50.151 } 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.151 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:50.410 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:50.410 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:51.345 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:51.345 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:51.345 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:51.603 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:51.604 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:52.539 nvme0n1 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:52.539 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:53.474 request: 00:12:53.474 { 00:12:53.474 "name": "nvme0", 00:12:53.474 "dhchap_key": "key2", 00:12:53.474 "dhchap_ctrlr_key": "key0", 00:12:53.474 "method": "bdev_nvme_set_keys", 00:12:53.474 "req_id": 1 00:12:53.474 } 00:12:53.474 Got JSON-RPC error response 00:12:53.474 response: 00:12:53.474 { 00:12:53.474 "code": -13, 00:12:53.474 "message": "Permission denied" 00:12:53.474 } 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:53.474 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.474 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:53.474 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68109 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68109 ']' 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68109 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68109 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:54.850 killing process with pid 68109 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:54.850 08:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68109' 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68109 00:12:54.850 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68109 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.108 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.108 rmmod nvme_tcp 00:12:55.108 rmmod nvme_fabrics 00:12:55.367 rmmod nvme_keyring 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 71165 ']' 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 71165 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71165 ']' 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71165 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71165 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.367 killing process with pid 71165 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71165' 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71165 00:12:55.367 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71165 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.367 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.R5K /tmp/spdk.key-sha256.160 /tmp/spdk.key-sha384.O7B /tmp/spdk.key-sha512.ApX /tmp/spdk.key-sha512.gOC /tmp/spdk.key-sha384.LaH /tmp/spdk.key-sha256.O2g '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:55.636 00:12:55.636 real 3m10.404s 00:12:55.636 user 7m36.431s 00:12:55.636 sys 0m29.685s 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.636 ************************************ 00:12:55.636 END TEST nvmf_auth_target 
00:12:55.636 ************************************ 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.636 08:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.934 ************************************ 00:12:55.934 START TEST nvmf_bdevio_no_huge 00:12:55.934 ************************************ 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:55.934 * Looking for test storage... 00:12:55.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.934 --rc genhtml_branch_coverage=1 00:12:55.934 --rc genhtml_function_coverage=1 00:12:55.934 --rc genhtml_legend=1 00:12:55.934 --rc geninfo_all_blocks=1 00:12:55.934 --rc geninfo_unexecuted_blocks=1 00:12:55.934 00:12:55.934 ' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.934 
08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.934 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.935 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.935 
08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:55.935 Cannot find device "nvmf_init_br" 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:55.935 Cannot find device "nvmf_init_br2" 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:55.935 Cannot find device "nvmf_tgt_br" 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.935 Cannot find device "nvmf_tgt_br2" 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:55.935 Cannot find device "nvmf_init_br" 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:55.935 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:56.195 Cannot find device "nvmf_init_br2" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:56.195 Cannot find device "nvmf_tgt_br" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.195 Cannot find device "nvmf_tgt_br2" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.195 Cannot find device "nvmf_br" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.195 Cannot find device "nvmf_init_if" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.195 Cannot find device "nvmf_init_if2" 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:56.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.195 08:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:12:56.195 00:12:56.195 --- 10.0.0.3 ping statistics --- 00:12:56.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.195 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:56.195 00:12:56.195 --- 10.0.0.4 ping statistics --- 00:12:56.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.195 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:56.195 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:56.195 00:12:56.195 --- 10.0.0.1 ping statistics --- 00:12:56.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.195 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:56.455 00:12:56.455 --- 10.0.0.2 ping statistics --- 00:12:56.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.455 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.455 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71793 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71793 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71793 ']' 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.455 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.455 [2024-12-11 08:47:04.061351] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:12:56.455 [2024-12-11 08:47:04.061477] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:56.455 [2024-12-11 08:47:04.225716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.714 [2024-12-11 08:47:04.285426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.714 [2024-12-11 08:47:04.285479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.714 [2024-12-11 08:47:04.285491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.714 [2024-12-11 08:47:04.285499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.714 [2024-12-11 08:47:04.285507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.714 [2024-12-11 08:47:04.286351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:12:56.714 [2024-12-11 08:47:04.286488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:12:56.714 [2024-12-11 08:47:04.286625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:12:56.714 [2024-12-11 08:47:04.286872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.714 [2024-12-11 08:47:04.291677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.714 [2024-12-11 08:47:04.455574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.714 Malloc0 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.714 08:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.714 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.973 [2024-12-11 08:47:04.499770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.973 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:56.973 { 00:12:56.973 "params": { 00:12:56.973 "name": "Nvme$subsystem", 00:12:56.973 "trtype": "$TEST_TRANSPORT", 00:12:56.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.974 "adrfam": "ipv4", 00:12:56.974 "trsvcid": "$NVMF_PORT", 00:12:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.974 "hdgst": ${hdgst:-false}, 00:12:56.974 "ddgst": ${ddgst:-false} 00:12:56.974 }, 00:12:56.974 "method": "bdev_nvme_attach_controller" 00:12:56.974 } 00:12:56.974 EOF 00:12:56.974 )") 00:12:56.974 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:56.974 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:12:56.974 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:56.974 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.974 "params": { 00:12:56.974 "name": "Nvme1", 00:12:56.974 "trtype": "tcp", 00:12:56.974 "traddr": "10.0.0.3", 00:12:56.974 "adrfam": "ipv4", 00:12:56.974 "trsvcid": "4420", 00:12:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.974 "hdgst": false, 00:12:56.974 "ddgst": false 00:12:56.974 }, 00:12:56.974 "method": "bdev_nvme_attach_controller" 00:12:56.974 }' 00:12:56.974 [2024-12-11 08:47:04.560269] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:12:56.974 [2024-12-11 08:47:04.560373] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71827 ] 00:12:56.974 [2024-12-11 08:47:04.721209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.233 [2024-12-11 08:47:04.796171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.233 [2024-12-11 08:47:04.796323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.233 [2024-12-11 08:47:04.796331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.233 [2024-12-11 08:47:04.810432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.497 I/O targets: 00:12:57.497 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:57.497 00:12:57.497 00:12:57.497 CUnit - A unit testing framework for C - Version 2.1-3 00:12:57.497 http://cunit.sourceforge.net/ 00:12:57.497 00:12:57.497 00:12:57.497 Suite: bdevio tests on: Nvme1n1 00:12:57.497 Test: blockdev write read block ...passed 00:12:57.497 Test: blockdev write zeroes read block ...passed 00:12:57.497 Test: blockdev write zeroes read no split ...passed 00:12:57.497 Test: blockdev write zeroes read split ...passed 00:12:57.497 Test: blockdev write zeroes read split partial ...passed 00:12:57.497 Test: blockdev reset ...[2024-12-11 08:47:05.041546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:57.497 [2024-12-11 08:47:05.041648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c720 (9): Bad file descriptor 00:12:57.497 [2024-12-11 08:47:05.059270] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:57.497 passed 00:12:57.497 Test: blockdev write read 8 blocks ...passed 00:12:57.497 Test: blockdev write read size > 128k ...passed 00:12:57.497 Test: blockdev write read invalid size ...passed 00:12:57.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:57.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:57.497 Test: blockdev write read max offset ...passed 00:12:57.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:57.497 Test: blockdev writev readv 8 blocks ...passed 00:12:57.497 Test: blockdev writev readv 30 x 1block ...passed 00:12:57.497 Test: blockdev writev readv block ...passed 00:12:57.497 Test: blockdev writev readv size > 128k ...passed 00:12:57.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:57.497 Test: blockdev comparev and writev ...[2024-12-11 08:47:05.067277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.067330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.067357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.067371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.067876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.067905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.067934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.067947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.068262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.068284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.068304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.068317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.068594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.068615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.068635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:57.497 [2024-12-11 08:47:05.068648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:57.497 passed 00:12:57.497 Test: blockdev nvme passthru rw ...passed 00:12:57.497 Test: blockdev nvme passthru vendor specific ...[2024-12-11 08:47:05.069479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.497 [2024-12-11 08:47:05.069510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.069624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.497 [2024-12-11 08:47:05.069644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:57.497 [2024-12-11 08:47:05.069759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.497 [2024-12-11 08:47:05.069778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:57.497 passed 00:12:57.497 Test: blockdev nvme admin passthru ...[2024-12-11 08:47:05.069893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:57.497 [2024-12-11 08:47:05.069919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:57.497 passed 00:12:57.497 Test: blockdev copy ...passed 00:12:57.497 00:12:57.497 Run Summary: Type Total Ran Passed Failed Inactive 00:12:57.497 suites 1 1 n/a 0 0 00:12:57.497 tests 23 23 23 0 0 00:12:57.497 asserts 152 152 152 0 n/a 00:12:57.497 00:12:57.497 Elapsed time = 0.163 seconds 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.756 rmmod nvme_tcp 00:12:57.756 rmmod nvme_fabrics 00:12:57.756 rmmod nvme_keyring 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71793 ']' 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71793 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71793 ']' 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71793 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.756 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71793 00:12:58.016 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:58.016 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:58.016 killing process with pid 71793 00:12:58.016 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71793' 00:12:58.016 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71793 00:12:58.016 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71793 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.274 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.275 08:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.275 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.275 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.275 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.275 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.275 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:58.533 00:12:58.533 real 0m2.696s 00:12:58.533 user 0m7.566s 00:12:58.533 sys 0m1.244s 00:12:58.533 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.534 ************************************ 00:12:58.534 END TEST nvmf_bdevio_no_huge 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:58.534 ************************************ 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.534 ************************************ 00:12:58.534 START TEST nvmf_tls 00:12:58.534 ************************************ 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:58.534 * Looking for test storage... 
00:12:58.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.534 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:58.793 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.794 --rc genhtml_branch_coverage=1 00:12:58.794 --rc genhtml_function_coverage=1 00:12:58.794 --rc genhtml_legend=1 00:12:58.794 --rc geninfo_all_blocks=1 00:12:58.794 --rc geninfo_unexecuted_blocks=1 00:12:58.794 00:12:58.794 ' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.794 --rc genhtml_branch_coverage=1 00:12:58.794 --rc genhtml_function_coverage=1 00:12:58.794 --rc genhtml_legend=1 00:12:58.794 --rc geninfo_all_blocks=1 00:12:58.794 --rc geninfo_unexecuted_blocks=1 00:12:58.794 00:12:58.794 ' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.794 --rc genhtml_branch_coverage=1 00:12:58.794 --rc genhtml_function_coverage=1 00:12:58.794 --rc genhtml_legend=1 00:12:58.794 --rc geninfo_all_blocks=1 00:12:58.794 --rc geninfo_unexecuted_blocks=1 00:12:58.794 00:12:58.794 ' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.794 --rc genhtml_branch_coverage=1 00:12:58.794 --rc genhtml_function_coverage=1 00:12:58.794 --rc genhtml_legend=1 00:12:58.794 --rc geninfo_all_blocks=1 00:12:58.794 --rc geninfo_unexecuted_blocks=1 00:12:58.794 00:12:58.794 ' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.794 08:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.794 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.795 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.795 
08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.795 Cannot find device "nvmf_init_br" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.795 Cannot find device "nvmf_init_br2" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.795 Cannot find device "nvmf_tgt_br" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.795 Cannot find device "nvmf_tgt_br2" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.795 Cannot find device "nvmf_init_br" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.795 Cannot find device "nvmf_init_br2" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.795 Cannot find device "nvmf_tgt_br" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.795 Cannot find device "nvmf_tgt_br2" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.795 Cannot find device "nvmf_br" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.795 Cannot find device "nvmf_init_if" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.795 Cannot find device "nvmf_init_if2" 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.795 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:59.054 08:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:59.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:12:59.054 00:12:59.054 --- 10.0.0.3 ping statistics --- 00:12:59.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.054 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:59.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:59.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:12:59.054 00:12:59.054 --- 10.0.0.4 ping statistics --- 00:12:59.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.054 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:59.054 00:12:59.054 --- 10.0.0.1 ping statistics --- 00:12:59.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.054 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:59.054 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:59.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:12:59.055 00:12:59.055 --- 10.0.0.2 ping statistics --- 00:12:59.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.055 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72059 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72059 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72059 ']' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.055 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.314 [2024-12-11 08:47:06.855038] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
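For readers skimming this trace: the nvmf_veth_init steps above build a small veth/bridge topology inside the build VM so the NVMe/TCP target can run in its own network namespace while the initiator stays in the default one. The sketch below condenses the ip(8) and iptables commands already traced above (nvmf/common.sh@145-225); interface names and 10.0.0.x addresses are exactly the ones this run used, and the loops/link-up shorthand are my condensation, not the literal script text.

# Condensed from the nvmf_veth_init trace above; not the verbatim script.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                  # bridge ties both sides together
for i in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$i" up
done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
for i in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                               # sanity check: initiator -> target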
00:12:59.314 [2024-12-11 08:47:06.855372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.314 [2024-12-11 08:47:07.000487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.314 [2024-12-11 08:47:07.030570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.314 [2024-12-11 08:47:07.030618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.314 [2024-12-11 08:47:07.030646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.314 [2024-12-11 08:47:07.030653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.314 [2024-12-11 08:47:07.030659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.314 [2024-12-11 08:47:07.030924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:59.572 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:59.831 true 00:12:59.831 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:59.831 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.089 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:00.089 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:00.089 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:00.348 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.348 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:00.606 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:00.606 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:00.606 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:00.865 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:00.865 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:01.124 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:01.124 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:01.124 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:01.124 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.382 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:01.382 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:01.382 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:01.641 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.641 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:01.900 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:01.900 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:01.900 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:02.158 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:02.158 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:02.417 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.o9sjXPZUL1 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.cFF6l94Vh0 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.o9sjXPZUL1 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.cFF6l94Vh0 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:02.676 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:03.244 [2024-12-11 08:47:10.746860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.244 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.o9sjXPZUL1 00:13:03.244 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9sjXPZUL1 00:13:03.244 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:03.244 [2024-12-11 08:47:11.006241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.501 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:03.760 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:03.760 [2024-12-11 08:47:11.518325] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:03.760 [2024-12-11 08:47:11.518570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:04.019 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:04.277 malloc0 00:13:04.277 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:04.537 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9sjXPZUL1 00:13:04.799 08:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:05.058 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.o9sjXPZUL1 00:13:17.291 Initializing NVMe Controllers 00:13:17.291 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.291 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:17.291 Initialization complete. Launching workers. 00:13:17.291 ======================================================== 00:13:17.291 Latency(us) 00:13:17.291 Device Information : IOPS MiB/s Average min max 00:13:17.291 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9837.79 38.43 6506.92 1457.37 8867.84 00:13:17.291 ======================================================== 00:13:17.291 Total : 9837.79 38.43 6506.92 1457.37 8867.84 00:13:17.291 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9sjXPZUL1 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9sjXPZUL1 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72293 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72293 /var/tmp/bdevperf.sock 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72293 ']' 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.291 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
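The target-side TLS plumbing is easy to lose in the xtrace noise above, so here is a rough condensation of the rpc.py calls this run actually made (target/tls.sh@50-59, @119-134). The /tmp/tmp.o9sjXPZUL1 path is the mktemp result from this particular run, and the key string is the interchange-format PSK produced by format_interchange_psk above (it appears to follow the NVMe TLS interchange layout: a NVMeTLSkey-1:01: prefix followed by base64 of the configured key material plus a checksum); treat the exact encoding details as an assumption, only the commands below are taken from the trace.

# Condensed from the target-side trace above; not the verbatim script.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/tmp.o9sjXPZUL1                        # mktemp result in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"                              # keyring expects a private key file

$rpc sock_set_default_impl -i ssl                   # use the ssl socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13  # force TLS 1.3
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k = TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0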
00:13:17.292 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.292 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.292 [2024-12-11 08:47:22.898289] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:17.292 [2024-12-11 08:47:22.898542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72293 ] 00:13:17.292 [2024-12-11 08:47:23.040443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.292 [2024-12-11 08:47:23.072378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.292 [2024-12-11 08:47:23.101881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.292 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.292 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:17.292 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9sjXPZUL1 00:13:17.292 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:17.292 [2024-12-11 08:47:23.737948] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:17.292 TLSTESTn1 00:13:17.292 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:17.292 Running I/O for 10 seconds... 
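On the initiator side the same key file is consumed twice in this run: once directly by spdk_nvme_perf via --psk-path (target/tls.sh@138), and once through the bdevperf RPC keyring before bdev_nvme_attach_controller (run_bdevperf, target/tls.sh@22-42). The lines below are lifted from the trace above and only lightly condensed; the backgrounding of bdevperf is implied by its -z (wait-for-RPC) mode rather than shown verbatim in the log.

# Condensed host-side sequence from the trace above.
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/spdk_nvme_perf" -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.o9sjXPZUL1

"$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPC
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9sjXPZUL1
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
"$spdk/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests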
00:13:18.234 4201.00 IOPS, 16.41 MiB/s [2024-12-11T08:47:26.943Z] 4234.00 IOPS, 16.54 MiB/s [2024-12-11T08:47:28.317Z] 4199.33 IOPS, 16.40 MiB/s [2024-12-11T08:47:29.252Z] 4163.50 IOPS, 16.26 MiB/s [2024-12-11T08:47:30.187Z] 4127.60 IOPS, 16.12 MiB/s [2024-12-11T08:47:31.123Z] 4101.50 IOPS, 16.02 MiB/s [2024-12-11T08:47:32.061Z] 4090.29 IOPS, 15.98 MiB/s [2024-12-11T08:47:33.009Z] 4080.12 IOPS, 15.94 MiB/s [2024-12-11T08:47:33.945Z] 4078.89 IOPS, 15.93 MiB/s [2024-12-11T08:47:34.205Z] 4076.50 IOPS, 15.92 MiB/s 00:13:26.431 Latency(us) 00:13:26.431 [2024-12-11T08:47:34.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:26.431 Verification LBA range: start 0x0 length 0x2000 00:13:26.431 TLSTESTn1 : 10.02 4082.43 15.95 0.00 0.00 31298.00 5332.25 22758.87 00:13:26.431 [2024-12-11T08:47:34.205Z] =================================================================================================================== 00:13:26.431 [2024-12-11T08:47:34.205Z] Total : 4082.43 15.95 0.00 0.00 31298.00 5332.25 22758.87 00:13:26.431 { 00:13:26.431 "results": [ 00:13:26.431 { 00:13:26.431 "job": "TLSTESTn1", 00:13:26.431 "core_mask": "0x4", 00:13:26.431 "workload": "verify", 00:13:26.431 "status": "finished", 00:13:26.431 "verify_range": { 00:13:26.431 "start": 0, 00:13:26.431 "length": 8192 00:13:26.431 }, 00:13:26.431 "queue_depth": 128, 00:13:26.431 "io_size": 4096, 00:13:26.431 "runtime": 10.015837, 00:13:26.431 "iops": 4082.4346482475703, 00:13:26.431 "mibps": 15.947010344717071, 00:13:26.431 "io_failed": 0, 00:13:26.431 "io_timeout": 0, 00:13:26.431 "avg_latency_us": 31298.004275344116, 00:13:26.431 "min_latency_us": 5332.2472727272725, 00:13:26.431 "max_latency_us": 22758.865454545456 00:13:26.431 } 00:13:26.431 ], 00:13:26.431 "core_count": 1 00:13:26.431 } 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72293 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72293 ']' 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72293 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.431 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72293 00:13:26.431 killing process with pid 72293 00:13:26.431 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.431 00:13:26.431 Latency(us) 00:13:26.431 [2024-12-11T08:47:34.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.431 [2024-12-11T08:47:34.205Z] =================================================================================================================== 00:13:26.431 [2024-12-11T08:47:34.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 72293' 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72293 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72293 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFF6l94Vh0 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFF6l94Vh0 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cFF6l94Vh0 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cFF6l94Vh0 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72420 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72420 /var/tmp/bdevperf.sock 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72420 ']' 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.431 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.690 [2024-12-11 08:47:34.211388] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:26.691 [2024-12-11 08:47:34.211704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72420 ] 00:13:26.691 [2024-12-11 08:47:34.365563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.691 [2024-12-11 08:47:34.398476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.691 [2024-12-11 08:47:34.428511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:26.949 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.949 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:26.949 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cFF6l94Vh0 00:13:27.208 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:27.469 [2024-12-11 08:47:34.992894] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.469 [2024-12-11 08:47:34.997915] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:27.469 [2024-12-11 08:47:34.998548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc150 (107): Transport endpoint is not connected 00:13:27.469 [2024-12-11 08:47:34.999536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc150 (9): Bad file descriptor 00:13:27.469 [2024-12-11 08:47:35.000533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:27.469 [2024-12-11 08:47:35.000559] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:27.469 [2024-12-11 08:47:35.000587] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:27.469 [2024-12-11 08:47:35.000602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:27.469 request: 00:13:27.469 { 00:13:27.469 "name": "TLSTEST", 00:13:27.469 "trtype": "tcp", 00:13:27.469 "traddr": "10.0.0.3", 00:13:27.469 "adrfam": "ipv4", 00:13:27.469 "trsvcid": "4420", 00:13:27.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:27.470 "prchk_reftag": false, 00:13:27.470 "prchk_guard": false, 00:13:27.470 "hdgst": false, 00:13:27.470 "ddgst": false, 00:13:27.470 "psk": "key0", 00:13:27.470 "allow_unrecognized_csi": false, 00:13:27.470 "method": "bdev_nvme_attach_controller", 00:13:27.470 "req_id": 1 00:13:27.470 } 00:13:27.470 Got JSON-RPC error response 00:13:27.470 response: 00:13:27.470 { 00:13:27.470 "code": -5, 00:13:27.470 "message": "Input/output error" 00:13:27.470 } 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72420 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72420 ']' 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72420 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72420 00:13:27.470 killing process with pid 72420 00:13:27.470 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.470 00:13:27.470 Latency(us) 00:13:27.470 [2024-12-11T08:47:35.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.470 [2024-12-11T08:47:35.244Z] =================================================================================================================== 00:13:27.470 [2024-12-11T08:47:35.244Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72420' 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72420 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72420 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9sjXPZUL1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9sjXPZUL1 
00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9sjXPZUL1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9sjXPZUL1 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72441 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72441 /var/tmp/bdevperf.sock 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72441 ']' 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.470 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.729 [2024-12-11 08:47:35.243981] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:27.729 [2024-12-11 08:47:35.244291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72441 ] 00:13:27.729 [2024-12-11 08:47:35.390017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.729 [2024-12-11 08:47:35.423413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.729 [2024-12-11 08:47:35.453217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.988 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.988 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:27.988 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9sjXPZUL1 00:13:28.246 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:28.246 [2024-12-11 08:47:36.012479] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.246 [2024-12-11 08:47:36.017447] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:28.247 [2024-12-11 08:47:36.017506] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:28.247 [2024-12-11 08:47:36.017577] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:28.247 [2024-12-11 08:47:36.018180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8f150 (107): Transport endpoint is not connected 00:13:28.247 [2024-12-11 08:47:36.019169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8f150 (9): Bad file descriptor 00:13:28.506 [2024-12-11 08:47:36.020176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:28.506 [2024-12-11 08:47:36.020349] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:28.506 [2024-12-11 08:47:36.020459] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:28.506 [2024-12-11 08:47:36.020600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:28.506 request: 00:13:28.506 { 00:13:28.506 "name": "TLSTEST", 00:13:28.506 "trtype": "tcp", 00:13:28.506 "traddr": "10.0.0.3", 00:13:28.506 "adrfam": "ipv4", 00:13:28.506 "trsvcid": "4420", 00:13:28.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.506 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:28.506 "prchk_reftag": false, 00:13:28.506 "prchk_guard": false, 00:13:28.506 "hdgst": false, 00:13:28.506 "ddgst": false, 00:13:28.506 "psk": "key0", 00:13:28.506 "allow_unrecognized_csi": false, 00:13:28.506 "method": "bdev_nvme_attach_controller", 00:13:28.506 "req_id": 1 00:13:28.506 } 00:13:28.506 Got JSON-RPC error response 00:13:28.506 response: 00:13:28.506 { 00:13:28.506 "code": -5, 00:13:28.506 "message": "Input/output error" 00:13:28.506 } 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72441 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72441 ']' 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72441 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72441 00:13:28.506 killing process with pid 72441 00:13:28.506 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.506 00:13:28.506 Latency(us) 00:13:28.506 [2024-12-11T08:47:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.506 [2024-12-11T08:47:36.280Z] =================================================================================================================== 00:13:28.506 [2024-12-11T08:47:36.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72441' 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72441 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72441 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9sjXPZUL1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9sjXPZUL1 
00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9sjXPZUL1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9sjXPZUL1 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72462 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.506 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72462 /var/tmp/bdevperf.sock 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72462 ']' 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.507 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.507 [2024-12-11 08:47:36.258915] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:28.507 [2024-12-11 08:47:36.259196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72462 ] 00:13:28.766 [2024-12-11 08:47:36.404877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.766 [2024-12-11 08:47:36.437034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.766 [2024-12-11 08:47:36.466071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.766 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.766 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:28.766 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9sjXPZUL1 00:13:29.024 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:29.283 [2024-12-11 08:47:37.045686] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.542 [2024-12-11 08:47:37.055607] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:29.542 [2024-12-11 08:47:37.055828] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:29.542 [2024-12-11 08:47:37.055888] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:29.542 [2024-12-11 08:47:37.056322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b150 (107): Transport endpoint is not connected 00:13:29.542 [2024-12-11 08:47:37.057313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200b150 (9): Bad file descriptor 00:13:29.542 [2024-12-11 08:47:37.058312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:29.542 [2024-12-11 08:47:37.058339] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:29.542 [2024-12-11 08:47:37.058351] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:29.542 [2024-12-11 08:47:37.058366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:29.542 request: 00:13:29.542 { 00:13:29.542 "name": "TLSTEST", 00:13:29.542 "trtype": "tcp", 00:13:29.542 "traddr": "10.0.0.3", 00:13:29.542 "adrfam": "ipv4", 00:13:29.542 "trsvcid": "4420", 00:13:29.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:29.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.542 "prchk_reftag": false, 00:13:29.542 "prchk_guard": false, 00:13:29.542 "hdgst": false, 00:13:29.542 "ddgst": false, 00:13:29.542 "psk": "key0", 00:13:29.543 "allow_unrecognized_csi": false, 00:13:29.543 "method": "bdev_nvme_attach_controller", 00:13:29.543 "req_id": 1 00:13:29.543 } 00:13:29.543 Got JSON-RPC error response 00:13:29.543 response: 00:13:29.543 { 00:13:29.543 "code": -5, 00:13:29.543 "message": "Input/output error" 00:13:29.543 } 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72462 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72462 ']' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72462 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72462 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72462' 00:13:29.543 killing process with pid 72462 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72462 00:13:29.543 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.543 00:13:29.543 Latency(us) 00:13:29.543 [2024-12-11T08:47:37.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.543 [2024-12-11T08:47:37.317Z] =================================================================================================================== 00:13:29.543 [2024-12-11T08:47:37.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72462 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.543 08:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72483 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72483 /var/tmp/bdevperf.sock 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72483 ']' 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.543 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.543 [2024-12-11 08:47:37.300013] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:29.543 [2024-12-11 08:47:37.300305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72483 ] 00:13:29.802 [2024-12-11 08:47:37.446229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.802 [2024-12-11 08:47:37.477909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.802 [2024-12-11 08:47:37.508159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.802 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.802 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:29.802 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:30.062 [2024-12-11 08:47:37.797083] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:30.062 [2024-12-11 08:47:37.797350] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:30.062 request: 00:13:30.062 { 00:13:30.062 "name": "key0", 00:13:30.062 "path": "", 00:13:30.062 "method": "keyring_file_add_key", 00:13:30.062 "req_id": 1 00:13:30.062 } 00:13:30.062 Got JSON-RPC error response 00:13:30.062 response: 00:13:30.062 { 00:13:30.062 "code": -1, 00:13:30.062 "message": "Operation not permitted" 00:13:30.062 } 00:13:30.062 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.320 [2024-12-11 08:47:38.093314] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:30.320 [2024-12-11 08:47:38.093584] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:30.580 request: 00:13:30.580 { 00:13:30.580 "name": "TLSTEST", 00:13:30.580 "trtype": "tcp", 00:13:30.580 "traddr": "10.0.0.3", 00:13:30.580 "adrfam": "ipv4", 00:13:30.580 "trsvcid": "4420", 00:13:30.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.580 "prchk_reftag": false, 00:13:30.580 "prchk_guard": false, 00:13:30.580 "hdgst": false, 00:13:30.580 "ddgst": false, 00:13:30.580 "psk": "key0", 00:13:30.580 "allow_unrecognized_csi": false, 00:13:30.580 "method": "bdev_nvme_attach_controller", 00:13:30.580 "req_id": 1 00:13:30.580 } 00:13:30.580 Got JSON-RPC error response 00:13:30.580 response: 00:13:30.580 { 00:13:30.580 "code": -126, 00:13:30.580 "message": "Required key not available" 00:13:30.580 } 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72483 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72483 ']' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72483 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.580 08:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72483 00:13:30.580 killing process with pid 72483 00:13:30.580 Received shutdown signal, test time was about 10.000000 seconds 00:13:30.580 00:13:30.580 Latency(us) 00:13:30.580 [2024-12-11T08:47:38.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.580 [2024-12-11T08:47:38.354Z] =================================================================================================================== 00:13:30.580 [2024-12-11T08:47:38.354Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72483' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72483 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72483 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 72059 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72059 ']' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72059 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72059 00:13:30.580 killing process with pid 72059 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72059' 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72059 00:13:30.580 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72059 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qzb2ejOdP9 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qzb2ejOdP9 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72514 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72514 00:13:30.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72514 ']' 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.839 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.840 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.840 [2024-12-11 08:47:38.583558] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:30.840 [2024-12-11 08:47:38.584506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.099 [2024-12-11 08:47:38.732411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.099 [2024-12-11 08:47:38.760691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.099 [2024-12-11 08:47:38.760969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
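The format_interchange_psk/format_key pair traced above turns the raw 48-byte key material into the NVMe TLS PSK interchange string seen in key_long (NVMeTLSkey-1:02:...:). The heredoc body itself is not echoed in the trace, so the following is only a sketch of what it plausibly computes, assuming the base64 payload is the key bytes with a little-endian CRC32 appended (which matches the length and shape of the value above):

```bash
# Hedged reconstruction of the format_key helper invoked above; the CRC32
# suffix and its byte order are assumptions, not read from the trace.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")     # 4-byte integrity tag
b64 = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:")                   # digest 2 -> "02" (48-byte/SHA-384-sized key)
EOF
}

key_long=$(format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2)
key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"   # the test does the same before registering the key
```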
00:13:31.099 [2024-12-11 08:47:38.761112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.099 [2024-12-11 08:47:38.761128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.099 [2024-12-11 08:47:38.761169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.099 [2024-12-11 08:47:38.761442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.099 [2024-12-11 08:47:38.792120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.099 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.099 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:31.099 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.099 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.099 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.358 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.358 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:13:31.358 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzb2ejOdP9 00:13:31.358 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:31.616 [2024-12-11 08:47:39.180745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.616 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:31.875 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:32.133 [2024-12-11 08:47:39.692846] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:32.133 [2024-12-11 08:47:39.693152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:32.133 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:32.392 malloc0 00:13:32.392 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:32.650 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:32.909 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzb2ejOdP9 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qzb2ejOdP9 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72562 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72562 /var/tmp/bdevperf.sock 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72562 ']' 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.168 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.168 [2024-12-11 08:47:40.850261] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:33.168 [2024-12-11 08:47:40.850473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72562 ] 00:13:33.427 [2024-12-11 08:47:40.998845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.427 [2024-12-11 08:47:41.039654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.427 [2024-12-11 08:47:41.073617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.427 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.427 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:33.427 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:33.686 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:33.945 [2024-12-11 08:47:41.608162] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.945 TLSTESTn1 00:13:33.945 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:34.203 Running I/O for 10 seconds... 00:13:36.083 4096.00 IOPS, 16.00 MiB/s [2024-12-11T08:47:45.235Z] 4160.00 IOPS, 16.25 MiB/s [2024-12-11T08:47:46.170Z] 4224.00 IOPS, 16.50 MiB/s [2024-12-11T08:47:47.107Z] 4245.75 IOPS, 16.58 MiB/s [2024-12-11T08:47:48.044Z] 4260.00 IOPS, 16.64 MiB/s [2024-12-11T08:47:48.980Z] 4287.50 IOPS, 16.75 MiB/s [2024-12-11T08:47:49.917Z] 4313.43 IOPS, 16.85 MiB/s [2024-12-11T08:47:50.853Z] 4333.50 IOPS, 16.93 MiB/s [2024-12-11T08:47:52.232Z] 4334.56 IOPS, 16.93 MiB/s [2024-12-11T08:47:52.232Z] 4335.40 IOPS, 16.94 MiB/s 00:13:44.458 Latency(us) 00:13:44.458 [2024-12-11T08:47:52.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.458 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:44.458 Verification LBA range: start 0x0 length 0x2000 00:13:44.458 TLSTESTn1 : 10.02 4340.99 16.96 0.00 0.00 29431.81 6136.55 30742.34 00:13:44.458 [2024-12-11T08:47:52.232Z] =================================================================================================================== 00:13:44.458 [2024-12-11T08:47:52.232Z] Total : 4340.99 16.96 0.00 0.00 29431.81 6136.55 30742.34 00:13:44.458 { 00:13:44.458 "results": [ 00:13:44.458 { 00:13:44.458 "job": "TLSTESTn1", 00:13:44.458 "core_mask": "0x4", 00:13:44.458 "workload": "verify", 00:13:44.458 "status": "finished", 00:13:44.458 "verify_range": { 00:13:44.458 "start": 0, 00:13:44.458 "length": 8192 00:13:44.458 }, 00:13:44.458 "queue_depth": 128, 00:13:44.458 "io_size": 4096, 00:13:44.458 "runtime": 10.016146, 00:13:44.458 "iops": 4340.991035873479, 00:13:44.458 "mibps": 16.956996233880776, 00:13:44.458 "io_failed": 0, 00:13:44.458 "io_timeout": 0, 00:13:44.458 "avg_latency_us": 29431.812624571383, 00:13:44.458 "min_latency_us": 6136.552727272728, 00:13:44.458 
"max_latency_us": 30742.34181818182 00:13:44.458 } 00:13:44.458 ], 00:13:44.458 "core_count": 1 00:13:44.458 } 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72562 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72562 ']' 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72562 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72562 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72562' 00:13:44.458 killing process with pid 72562 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72562 00:13:44.458 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.458 00:13:44.458 Latency(us) 00:13:44.458 [2024-12-11T08:47:52.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.458 [2024-12-11T08:47:52.232Z] =================================================================================================================== 00:13:44.458 [2024-12-11T08:47:52.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.458 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72562 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qzb2ejOdP9 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzb2ejOdP9 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzb2ejOdP9 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzb2ejOdP9 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qzb2ejOdP9 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72691 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72691 /var/tmp/bdevperf.sock 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72691 ']' 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.458 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.458 [2024-12-11 08:47:52.111270] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:44.458 [2024-12-11 08:47:52.112073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72691 ] 00:13:44.717 [2024-12-11 08:47:52.252430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.717 [2024-12-11 08:47:52.281963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.717 [2024-12-11 08:47:52.311256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.717 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.717 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:44.717 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:44.976 [2024-12-11 08:47:52.615480] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qzb2ejOdP9': 0100666 00:13:44.976 [2024-12-11 08:47:52.615833] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:44.976 request: 00:13:44.976 { 00:13:44.976 "name": "key0", 00:13:44.976 "path": "/tmp/tmp.qzb2ejOdP9", 00:13:44.976 "method": "keyring_file_add_key", 00:13:44.976 "req_id": 1 00:13:44.976 } 00:13:44.976 Got JSON-RPC error response 00:13:44.976 response: 00:13:44.976 { 00:13:44.976 "code": -1, 00:13:44.976 "message": "Operation not permitted" 00:13:44.976 } 00:13:44.976 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:45.235 [2024-12-11 08:47:52.859642] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:45.235 [2024-12-11 08:47:52.859997] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:45.235 request: 00:13:45.235 { 00:13:45.235 "name": "TLSTEST", 00:13:45.235 "trtype": "tcp", 00:13:45.235 "traddr": "10.0.0.3", 00:13:45.235 "adrfam": "ipv4", 00:13:45.235 "trsvcid": "4420", 00:13:45.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.235 "prchk_reftag": false, 00:13:45.235 "prchk_guard": false, 00:13:45.235 "hdgst": false, 00:13:45.235 "ddgst": false, 00:13:45.235 "psk": "key0", 00:13:45.235 "allow_unrecognized_csi": false, 00:13:45.235 "method": "bdev_nvme_attach_controller", 00:13:45.235 "req_id": 1 00:13:45.235 } 00:13:45.235 Got JSON-RPC error response 00:13:45.235 response: 00:13:45.235 { 00:13:45.235 "code": -126, 00:13:45.235 "message": "Required key not available" 00:13:45.235 } 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72691 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72691 ']' 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72691 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72691 00:13:45.235 killing process with pid 72691 00:13:45.235 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.235 00:13:45.235 Latency(us) 00:13:45.235 [2024-12-11T08:47:53.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.235 [2024-12-11T08:47:53.009Z] =================================================================================================================== 00:13:45.235 [2024-12-11T08:47:53.009Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72691' 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72691 00:13:45.235 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72691 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72514 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72514 ']' 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72514 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72514 00:13:45.494 killing process with pid 72514 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72514' 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72514 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72514 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72722 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72722 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72722 ']' 00:13:45.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.494 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.754 [2024-12-11 08:47:53.266755] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:45.754 [2024-12-11 08:47:53.267027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.754 [2024-12-11 08:47:53.408255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.754 [2024-12-11 08:47:53.437500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.754 [2024-12-11 08:47:53.437794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.754 [2024-12-11 08:47:53.437831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.754 [2024-12-11 08:47:53.437839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.754 [2024-12-11 08:47:53.437846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
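With this target (pid 72722) up, tls.sh@178 deliberately feeds setup_nvmf_tgt the key file that was chmod'ed 0666 a few steps earlier; the trace below shows the file-based keyring rejecting it and the dependent host registration failing. Condensed to the same RPCs (rpc.py path abbreviated relative to the SPDK repo):

```bash
# Negative case exercised below: a PSK file must be owner-only (0600) before
# keyring_file_add_key will accept it.
chmod 0666 /tmp/tmp.qzb2ejOdP9
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9
# -> "Invalid permissions for key file '/tmp/tmp.qzb2ejOdP9': 0100666",
#    keyring_file_add_key fails with -1 (Operation not permitted)

scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
# -> "Key 'key0' does not exist" (-32603), since the key was never registered

chmod 0600 /tmp/tmp.qzb2ejOdP9   # tls.sh@182 restores this before the next run
```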
00:13:45.754 [2024-12-11 08:47:53.438173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.754 [2024-12-11 08:47:53.467589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.754 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.754 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.754 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.754 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.754 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzb2ejOdP9 00:13:46.013 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.272 [2024-12-11 08:47:53.835297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.272 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.531 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:46.791 [2024-12-11 08:47:54.327439] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.791 [2024-12-11 08:47:54.327896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:46.791 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:47.050 malloc0 00:13:47.050 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:47.309 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:47.309 
[2024-12-11 08:47:55.045224] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qzb2ejOdP9': 0100666 00:13:47.309 [2024-12-11 08:47:55.045266] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:47.309 request: 00:13:47.309 { 00:13:47.309 "name": "key0", 00:13:47.309 "path": "/tmp/tmp.qzb2ejOdP9", 00:13:47.309 "method": "keyring_file_add_key", 00:13:47.309 "req_id": 1 00:13:47.309 } 00:13:47.309 Got JSON-RPC error response 00:13:47.309 response: 00:13:47.309 { 00:13:47.309 "code": -1, 00:13:47.309 "message": "Operation not permitted" 00:13:47.309 } 00:13:47.309 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:47.567 [2024-12-11 08:47:55.305311] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:47.567 [2024-12-11 08:47:55.305379] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:47.567 request: 00:13:47.567 { 00:13:47.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.567 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.567 "psk": "key0", 00:13:47.567 "method": "nvmf_subsystem_add_host", 00:13:47.567 "req_id": 1 00:13:47.567 } 00:13:47.567 Got JSON-RPC error response 00:13:47.567 response: 00:13:47.567 { 00:13:47.567 "code": -32603, 00:13:47.567 "message": "Internal error" 00:13:47.567 } 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72722 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72722 ']' 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72722 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.567 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72722 00:13:47.826 killing process with pid 72722 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72722' 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72722 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72722 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qzb2ejOdP9 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72778 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72778 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72778 ']' 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.826 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.826 [2024-12-11 08:47:55.555804] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:47.826 [2024-12-11 08:47:55.556053] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.085 [2024-12-11 08:47:55.697947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.085 [2024-12-11 08:47:55.728800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.085 [2024-12-11 08:47:55.728855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.085 [2024-12-11 08:47:55.728883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.085 [2024-12-11 08:47:55.728891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.085 [2024-12-11 08:47:55.728898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
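With the key back at mode 0600 and a fresh target (pid 72778) listening for RPCs, the setup_nvmf_tgt trace that follows reduces to this RPC sequence (addresses, NQNs and key path exactly as in the trace; rpc.py path abbreviated):

```bash
# Condensed from the setup_nvmf_tgt trace below; same RPCs, same order.
rpc="scripts/rpc.py"   # /home/vagrant/spdk_repo/spdk/scripts/rpc.py in the trace

$rpc nvmf_create_transport -t tcp -o                      # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                         # -k: TLS-enabled listener
$rpc bdev_malloc_create 32 4096 -b malloc0                # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9        # succeeds now that mode is 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
```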
00:13:48.085 [2024-12-11 08:47:55.729203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.085 [2024-12-11 08:47:55.759315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzb2ejOdP9 00:13:48.085 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.351 [2024-12-11 08:47:56.075519] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.351 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:48.618 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:48.877 [2024-12-11 08:47:56.563642] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.877 [2024-12-11 08:47:56.564062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:48.877 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.136 malloc0 00:13:49.136 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.395 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:49.654 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
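On the initiator side, the bdevperf run that follows (pid 72826), like the earlier successful TLSTESTn1 run (pid 72562), pairs a bdevperf instance in RPC-wait mode with the same key material. A condensed view of those traces (binary and script paths abbreviated relative to the SPDK repo):

```bash
# Condensed from the run_bdevperf traces in this log.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
# (the test waits for the /var/tmp/bdevperf.sock RPC socket before continuing)

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# -> bdev TLSTESTn1 appears once the TLS attach succeeds

# the earlier run (pid 72562) then drove I/O through it for ~10 s:
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```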
00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72826 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72826 /var/tmp/bdevperf.sock 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72826 ']' 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.913 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.172 [2024-12-11 08:47:57.689987] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:50.172 [2024-12-11 08:47:57.690296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72826 ] 00:13:50.172 [2024-12-11 08:47:57.834258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.172 [2024-12-11 08:47:57.866637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.172 [2024-12-11 08:47:57.895020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:50.172 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.172 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:50.172 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:13:50.430 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:50.688 [2024-12-11 08:47:58.405752] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.947 TLSTESTn1 00:13:50.947 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:51.206 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:51.206 "subsystems": [ 00:13:51.206 { 00:13:51.206 "subsystem": "keyring", 00:13:51.206 "config": [ 00:13:51.206 { 00:13:51.206 "method": "keyring_file_add_key", 00:13:51.206 "params": { 00:13:51.206 "name": "key0", 00:13:51.206 "path": "/tmp/tmp.qzb2ejOdP9" 00:13:51.206 } 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 }, 
00:13:51.206 { 00:13:51.206 "subsystem": "iobuf", 00:13:51.206 "config": [ 00:13:51.206 { 00:13:51.206 "method": "iobuf_set_options", 00:13:51.206 "params": { 00:13:51.206 "small_pool_count": 8192, 00:13:51.206 "large_pool_count": 1024, 00:13:51.206 "small_bufsize": 8192, 00:13:51.206 "large_bufsize": 135168, 00:13:51.206 "enable_numa": false 00:13:51.206 } 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "subsystem": "sock", 00:13:51.206 "config": [ 00:13:51.206 { 00:13:51.206 "method": "sock_set_default_impl", 00:13:51.206 "params": { 00:13:51.206 "impl_name": "uring" 00:13:51.206 } 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "method": "sock_impl_set_options", 00:13:51.206 "params": { 00:13:51.206 "impl_name": "ssl", 00:13:51.206 "recv_buf_size": 4096, 00:13:51.206 "send_buf_size": 4096, 00:13:51.206 "enable_recv_pipe": true, 00:13:51.206 "enable_quickack": false, 00:13:51.206 "enable_placement_id": 0, 00:13:51.206 "enable_zerocopy_send_server": true, 00:13:51.206 "enable_zerocopy_send_client": false, 00:13:51.206 "zerocopy_threshold": 0, 00:13:51.206 "tls_version": 0, 00:13:51.206 "enable_ktls": false 00:13:51.206 } 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "method": "sock_impl_set_options", 00:13:51.206 "params": { 00:13:51.206 "impl_name": "posix", 00:13:51.206 "recv_buf_size": 2097152, 00:13:51.206 "send_buf_size": 2097152, 00:13:51.206 "enable_recv_pipe": true, 00:13:51.206 "enable_quickack": false, 00:13:51.206 "enable_placement_id": 0, 00:13:51.206 "enable_zerocopy_send_server": true, 00:13:51.206 "enable_zerocopy_send_client": false, 00:13:51.206 "zerocopy_threshold": 0, 00:13:51.206 "tls_version": 0, 00:13:51.206 "enable_ktls": false 00:13:51.206 } 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "method": "sock_impl_set_options", 00:13:51.206 "params": { 00:13:51.206 "impl_name": "uring", 00:13:51.206 "recv_buf_size": 2097152, 00:13:51.206 "send_buf_size": 2097152, 00:13:51.206 "enable_recv_pipe": true, 00:13:51.206 "enable_quickack": false, 00:13:51.206 "enable_placement_id": 0, 00:13:51.206 "enable_zerocopy_send_server": false, 00:13:51.206 "enable_zerocopy_send_client": false, 00:13:51.206 "zerocopy_threshold": 0, 00:13:51.206 "tls_version": 0, 00:13:51.206 "enable_ktls": false 00:13:51.206 } 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "subsystem": "vmd", 00:13:51.206 "config": [] 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "subsystem": "accel", 00:13:51.206 "config": [ 00:13:51.206 { 00:13:51.206 "method": "accel_set_options", 00:13:51.206 "params": { 00:13:51.206 "small_cache_size": 128, 00:13:51.206 "large_cache_size": 16, 00:13:51.207 "task_count": 2048, 00:13:51.207 "sequence_count": 2048, 00:13:51.207 "buf_count": 2048 00:13:51.207 } 00:13:51.207 } 00:13:51.207 ] 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "subsystem": "bdev", 00:13:51.207 "config": [ 00:13:51.207 { 00:13:51.207 "method": "bdev_set_options", 00:13:51.207 "params": { 00:13:51.207 "bdev_io_pool_size": 65535, 00:13:51.207 "bdev_io_cache_size": 256, 00:13:51.207 "bdev_auto_examine": true, 00:13:51.207 "iobuf_small_cache_size": 128, 00:13:51.207 "iobuf_large_cache_size": 16 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "bdev_raid_set_options", 00:13:51.207 "params": { 00:13:51.207 "process_window_size_kb": 1024, 00:13:51.207 "process_max_bandwidth_mb_sec": 0 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "bdev_iscsi_set_options", 00:13:51.207 "params": { 00:13:51.207 "timeout_sec": 30 00:13:51.207 } 00:13:51.207 
}, 00:13:51.207 { 00:13:51.207 "method": "bdev_nvme_set_options", 00:13:51.207 "params": { 00:13:51.207 "action_on_timeout": "none", 00:13:51.207 "timeout_us": 0, 00:13:51.207 "timeout_admin_us": 0, 00:13:51.207 "keep_alive_timeout_ms": 10000, 00:13:51.207 "arbitration_burst": 0, 00:13:51.207 "low_priority_weight": 0, 00:13:51.207 "medium_priority_weight": 0, 00:13:51.207 "high_priority_weight": 0, 00:13:51.207 "nvme_adminq_poll_period_us": 10000, 00:13:51.207 "nvme_ioq_poll_period_us": 0, 00:13:51.207 "io_queue_requests": 0, 00:13:51.207 "delay_cmd_submit": true, 00:13:51.207 "transport_retry_count": 4, 00:13:51.207 "bdev_retry_count": 3, 00:13:51.207 "transport_ack_timeout": 0, 00:13:51.207 "ctrlr_loss_timeout_sec": 0, 00:13:51.207 "reconnect_delay_sec": 0, 00:13:51.207 "fast_io_fail_timeout_sec": 0, 00:13:51.207 "disable_auto_failback": false, 00:13:51.207 "generate_uuids": false, 00:13:51.207 "transport_tos": 0, 00:13:51.207 "nvme_error_stat": false, 00:13:51.207 "rdma_srq_size": 0, 00:13:51.207 "io_path_stat": false, 00:13:51.207 "allow_accel_sequence": false, 00:13:51.207 "rdma_max_cq_size": 0, 00:13:51.207 "rdma_cm_event_timeout_ms": 0, 00:13:51.207 "dhchap_digests": [ 00:13:51.207 "sha256", 00:13:51.207 "sha384", 00:13:51.207 "sha512" 00:13:51.207 ], 00:13:51.207 "dhchap_dhgroups": [ 00:13:51.207 "null", 00:13:51.207 "ffdhe2048", 00:13:51.207 "ffdhe3072", 00:13:51.207 "ffdhe4096", 00:13:51.207 "ffdhe6144", 00:13:51.207 "ffdhe8192" 00:13:51.207 ], 00:13:51.207 "rdma_umr_per_io": false 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "bdev_nvme_set_hotplug", 00:13:51.207 "params": { 00:13:51.207 "period_us": 100000, 00:13:51.207 "enable": false 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "bdev_malloc_create", 00:13:51.207 "params": { 00:13:51.207 "name": "malloc0", 00:13:51.207 "num_blocks": 8192, 00:13:51.207 "block_size": 4096, 00:13:51.207 "physical_block_size": 4096, 00:13:51.207 "uuid": "86dc926a-744d-4db9-a23f-d3bc5a8d8b8b", 00:13:51.207 "optimal_io_boundary": 0, 00:13:51.207 "md_size": 0, 00:13:51.207 "dif_type": 0, 00:13:51.207 "dif_is_head_of_md": false, 00:13:51.207 "dif_pi_format": 0 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "bdev_wait_for_examine" 00:13:51.207 } 00:13:51.207 ] 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "subsystem": "nbd", 00:13:51.207 "config": [] 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "subsystem": "scheduler", 00:13:51.207 "config": [ 00:13:51.207 { 00:13:51.207 "method": "framework_set_scheduler", 00:13:51.207 "params": { 00:13:51.207 "name": "static" 00:13:51.207 } 00:13:51.207 } 00:13:51.207 ] 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "subsystem": "nvmf", 00:13:51.207 "config": [ 00:13:51.207 { 00:13:51.207 "method": "nvmf_set_config", 00:13:51.207 "params": { 00:13:51.207 "discovery_filter": "match_any", 00:13:51.207 "admin_cmd_passthru": { 00:13:51.207 "identify_ctrlr": false 00:13:51.207 }, 00:13:51.207 "dhchap_digests": [ 00:13:51.207 "sha256", 00:13:51.207 "sha384", 00:13:51.207 "sha512" 00:13:51.207 ], 00:13:51.207 "dhchap_dhgroups": [ 00:13:51.207 "null", 00:13:51.207 "ffdhe2048", 00:13:51.207 "ffdhe3072", 00:13:51.207 "ffdhe4096", 00:13:51.207 "ffdhe6144", 00:13:51.207 "ffdhe8192" 00:13:51.207 ] 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_set_max_subsystems", 00:13:51.207 "params": { 00:13:51.207 "max_subsystems": 1024 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_set_crdt", 00:13:51.207 "params": { 
00:13:51.207 "crdt1": 0, 00:13:51.207 "crdt2": 0, 00:13:51.207 "crdt3": 0 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_create_transport", 00:13:51.207 "params": { 00:13:51.207 "trtype": "TCP", 00:13:51.207 "max_queue_depth": 128, 00:13:51.207 "max_io_qpairs_per_ctrlr": 127, 00:13:51.207 "in_capsule_data_size": 4096, 00:13:51.207 "max_io_size": 131072, 00:13:51.207 "io_unit_size": 131072, 00:13:51.207 "max_aq_depth": 128, 00:13:51.207 "num_shared_buffers": 511, 00:13:51.207 "buf_cache_size": 4294967295, 00:13:51.207 "dif_insert_or_strip": false, 00:13:51.207 "zcopy": false, 00:13:51.207 "c2h_success": false, 00:13:51.207 "sock_priority": 0, 00:13:51.207 "abort_timeout_sec": 1, 00:13:51.207 "ack_timeout": 0, 00:13:51.207 "data_wr_pool_size": 0 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_create_subsystem", 00:13:51.207 "params": { 00:13:51.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.207 "allow_any_host": false, 00:13:51.207 "serial_number": "SPDK00000000000001", 00:13:51.207 "model_number": "SPDK bdev Controller", 00:13:51.207 "max_namespaces": 10, 00:13:51.207 "min_cntlid": 1, 00:13:51.207 "max_cntlid": 65519, 00:13:51.207 "ana_reporting": false 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_subsystem_add_host", 00:13:51.207 "params": { 00:13:51.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.207 "host": "nqn.2016-06.io.spdk:host1", 00:13:51.207 "psk": "key0" 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_subsystem_add_ns", 00:13:51.207 "params": { 00:13:51.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.207 "namespace": { 00:13:51.207 "nsid": 1, 00:13:51.207 "bdev_name": "malloc0", 00:13:51.207 "nguid": "86DC926A744D4DB9A23FD3BC5A8D8B8B", 00:13:51.207 "uuid": "86dc926a-744d-4db9-a23f-d3bc5a8d8b8b", 00:13:51.207 "no_auto_visible": false 00:13:51.207 } 00:13:51.207 } 00:13:51.207 }, 00:13:51.207 { 00:13:51.207 "method": "nvmf_subsystem_add_listener", 00:13:51.207 "params": { 00:13:51.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.207 "listen_address": { 00:13:51.207 "trtype": "TCP", 00:13:51.207 "adrfam": "IPv4", 00:13:51.207 "traddr": "10.0.0.3", 00:13:51.207 "trsvcid": "4420" 00:13:51.207 }, 00:13:51.207 "secure_channel": true 00:13:51.207 } 00:13:51.207 } 00:13:51.207 ] 00:13:51.207 } 00:13:51.207 ] 00:13:51.207 }' 00:13:51.207 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:51.467 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:51.467 "subsystems": [ 00:13:51.467 { 00:13:51.467 "subsystem": "keyring", 00:13:51.467 "config": [ 00:13:51.467 { 00:13:51.467 "method": "keyring_file_add_key", 00:13:51.467 "params": { 00:13:51.467 "name": "key0", 00:13:51.467 "path": "/tmp/tmp.qzb2ejOdP9" 00:13:51.467 } 00:13:51.467 } 00:13:51.467 ] 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "subsystem": "iobuf", 00:13:51.467 "config": [ 00:13:51.467 { 00:13:51.467 "method": "iobuf_set_options", 00:13:51.467 "params": { 00:13:51.467 "small_pool_count": 8192, 00:13:51.467 "large_pool_count": 1024, 00:13:51.467 "small_bufsize": 8192, 00:13:51.467 "large_bufsize": 135168, 00:13:51.467 "enable_numa": false 00:13:51.467 } 00:13:51.467 } 00:13:51.467 ] 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "subsystem": "sock", 00:13:51.467 "config": [ 00:13:51.467 { 00:13:51.467 "method": "sock_set_default_impl", 00:13:51.467 "params": { 
00:13:51.467 "impl_name": "uring" 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "sock_impl_set_options", 00:13:51.467 "params": { 00:13:51.467 "impl_name": "ssl", 00:13:51.467 "recv_buf_size": 4096, 00:13:51.467 "send_buf_size": 4096, 00:13:51.467 "enable_recv_pipe": true, 00:13:51.467 "enable_quickack": false, 00:13:51.467 "enable_placement_id": 0, 00:13:51.467 "enable_zerocopy_send_server": true, 00:13:51.467 "enable_zerocopy_send_client": false, 00:13:51.467 "zerocopy_threshold": 0, 00:13:51.467 "tls_version": 0, 00:13:51.467 "enable_ktls": false 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "sock_impl_set_options", 00:13:51.467 "params": { 00:13:51.467 "impl_name": "posix", 00:13:51.467 "recv_buf_size": 2097152, 00:13:51.467 "send_buf_size": 2097152, 00:13:51.467 "enable_recv_pipe": true, 00:13:51.467 "enable_quickack": false, 00:13:51.467 "enable_placement_id": 0, 00:13:51.467 "enable_zerocopy_send_server": true, 00:13:51.467 "enable_zerocopy_send_client": false, 00:13:51.467 "zerocopy_threshold": 0, 00:13:51.467 "tls_version": 0, 00:13:51.467 "enable_ktls": false 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "sock_impl_set_options", 00:13:51.467 "params": { 00:13:51.467 "impl_name": "uring", 00:13:51.467 "recv_buf_size": 2097152, 00:13:51.467 "send_buf_size": 2097152, 00:13:51.467 "enable_recv_pipe": true, 00:13:51.467 "enable_quickack": false, 00:13:51.467 "enable_placement_id": 0, 00:13:51.467 "enable_zerocopy_send_server": false, 00:13:51.467 "enable_zerocopy_send_client": false, 00:13:51.467 "zerocopy_threshold": 0, 00:13:51.467 "tls_version": 0, 00:13:51.467 "enable_ktls": false 00:13:51.467 } 00:13:51.467 } 00:13:51.467 ] 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "subsystem": "vmd", 00:13:51.467 "config": [] 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "subsystem": "accel", 00:13:51.467 "config": [ 00:13:51.467 { 00:13:51.467 "method": "accel_set_options", 00:13:51.467 "params": { 00:13:51.467 "small_cache_size": 128, 00:13:51.467 "large_cache_size": 16, 00:13:51.467 "task_count": 2048, 00:13:51.467 "sequence_count": 2048, 00:13:51.467 "buf_count": 2048 00:13:51.467 } 00:13:51.467 } 00:13:51.467 ] 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "subsystem": "bdev", 00:13:51.467 "config": [ 00:13:51.467 { 00:13:51.467 "method": "bdev_set_options", 00:13:51.467 "params": { 00:13:51.467 "bdev_io_pool_size": 65535, 00:13:51.467 "bdev_io_cache_size": 256, 00:13:51.467 "bdev_auto_examine": true, 00:13:51.467 "iobuf_small_cache_size": 128, 00:13:51.467 "iobuf_large_cache_size": 16 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "bdev_raid_set_options", 00:13:51.467 "params": { 00:13:51.467 "process_window_size_kb": 1024, 00:13:51.467 "process_max_bandwidth_mb_sec": 0 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "bdev_iscsi_set_options", 00:13:51.467 "params": { 00:13:51.467 "timeout_sec": 30 00:13:51.467 } 00:13:51.467 }, 00:13:51.467 { 00:13:51.467 "method": "bdev_nvme_set_options", 00:13:51.467 "params": { 00:13:51.467 "action_on_timeout": "none", 00:13:51.467 "timeout_us": 0, 00:13:51.467 "timeout_admin_us": 0, 00:13:51.467 "keep_alive_timeout_ms": 10000, 00:13:51.467 "arbitration_burst": 0, 00:13:51.467 "low_priority_weight": 0, 00:13:51.467 "medium_priority_weight": 0, 00:13:51.467 "high_priority_weight": 0, 00:13:51.467 "nvme_adminq_poll_period_us": 10000, 00:13:51.467 "nvme_ioq_poll_period_us": 0, 00:13:51.467 "io_queue_requests": 512, 00:13:51.468 "delay_cmd_submit": 
true, 00:13:51.468 "transport_retry_count": 4, 00:13:51.468 "bdev_retry_count": 3, 00:13:51.468 "transport_ack_timeout": 0, 00:13:51.468 "ctrlr_loss_timeout_sec": 0, 00:13:51.468 "reconnect_delay_sec": 0, 00:13:51.468 "fast_io_fail_timeout_sec": 0, 00:13:51.468 "disable_auto_failback": false, 00:13:51.468 "generate_uuids": false, 00:13:51.468 "transport_tos": 0, 00:13:51.468 "nvme_error_stat": false, 00:13:51.468 "rdma_srq_size": 0, 00:13:51.468 "io_path_stat": false, 00:13:51.468 "allow_accel_sequence": false, 00:13:51.468 "rdma_max_cq_size": 0, 00:13:51.468 "rdma_cm_event_timeout_ms": 0, 00:13:51.468 "dhchap_digests": [ 00:13:51.468 "sha256", 00:13:51.468 "sha384", 00:13:51.468 "sha512" 00:13:51.468 ], 00:13:51.468 "dhchap_dhgroups": [ 00:13:51.468 "null", 00:13:51.468 "ffdhe2048", 00:13:51.468 "ffdhe3072", 00:13:51.468 "ffdhe4096", 00:13:51.468 "ffdhe6144", 00:13:51.468 "ffdhe8192" 00:13:51.468 ], 00:13:51.468 "rdma_umr_per_io": false 00:13:51.468 } 00:13:51.468 }, 00:13:51.468 { 00:13:51.468 "method": "bdev_nvme_attach_controller", 00:13:51.468 "params": { 00:13:51.468 "name": "TLSTEST", 00:13:51.468 "trtype": "TCP", 00:13:51.468 "adrfam": "IPv4", 00:13:51.468 "traddr": "10.0.0.3", 00:13:51.468 "trsvcid": "4420", 00:13:51.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.468 "prchk_reftag": false, 00:13:51.468 "prchk_guard": false, 00:13:51.468 "ctrlr_loss_timeout_sec": 0, 00:13:51.468 "reconnect_delay_sec": 0, 00:13:51.468 "fast_io_fail_timeout_sec": 0, 00:13:51.468 "psk": "key0", 00:13:51.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.468 "hdgst": false, 00:13:51.468 "ddgst": false, 00:13:51.468 "multipath": "multipath" 00:13:51.468 } 00:13:51.468 }, 00:13:51.468 { 00:13:51.468 "method": "bdev_nvme_set_hotplug", 00:13:51.468 "params": { 00:13:51.468 "period_us": 100000, 00:13:51.468 "enable": false 00:13:51.468 } 00:13:51.468 }, 00:13:51.468 { 00:13:51.468 "method": "bdev_wait_for_examine" 00:13:51.468 } 00:13:51.468 ] 00:13:51.468 }, 00:13:51.468 { 00:13:51.468 "subsystem": "nbd", 00:13:51.468 "config": [] 00:13:51.468 } 00:13:51.468 ] 00:13:51.468 }' 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72826 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72826 ']' 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72826 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72826 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:51.468 killing process with pid 72826 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72826' 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72826 00:13:51.468 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.468 00:13:51.468 Latency(us) 00:13:51.468 [2024-12-11T08:47:59.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.468 
[2024-12-11T08:47:59.242Z] =================================================================================================================== 00:13:51.468 [2024-12-11T08:47:59.242Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.468 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72826 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72778 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72778 ']' 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72778 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72778 00:13:51.728 killing process with pid 72778 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72778' 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72778 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72778 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.728 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:51.728 "subsystems": [ 00:13:51.728 { 00:13:51.728 "subsystem": "keyring", 00:13:51.728 "config": [ 00:13:51.728 { 00:13:51.728 "method": "keyring_file_add_key", 00:13:51.728 "params": { 00:13:51.728 "name": "key0", 00:13:51.728 "path": "/tmp/tmp.qzb2ejOdP9" 00:13:51.728 } 00:13:51.728 } 00:13:51.728 ] 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "subsystem": "iobuf", 00:13:51.728 "config": [ 00:13:51.728 { 00:13:51.728 "method": "iobuf_set_options", 00:13:51.728 "params": { 00:13:51.728 "small_pool_count": 8192, 00:13:51.728 "large_pool_count": 1024, 00:13:51.728 "small_bufsize": 8192, 00:13:51.728 "large_bufsize": 135168, 00:13:51.728 "enable_numa": false 00:13:51.728 } 00:13:51.728 } 00:13:51.728 ] 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "subsystem": "sock", 00:13:51.728 "config": [ 00:13:51.728 { 00:13:51.728 "method": "sock_set_default_impl", 00:13:51.728 "params": { 00:13:51.728 "impl_name": "uring" 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "sock_impl_set_options", 00:13:51.728 "params": { 00:13:51.728 "impl_name": "ssl", 00:13:51.728 "recv_buf_size": 4096, 00:13:51.728 "send_buf_size": 4096, 00:13:51.728 "enable_recv_pipe": true, 00:13:51.728 "enable_quickack": false, 00:13:51.728 "enable_placement_id": 0, 00:13:51.728 "enable_zerocopy_send_server": true, 00:13:51.728 "enable_zerocopy_send_client": false, 00:13:51.728 "zerocopy_threshold": 0, 00:13:51.728 "tls_version": 0, 00:13:51.728 
"enable_ktls": false 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "sock_impl_set_options", 00:13:51.728 "params": { 00:13:51.728 "impl_name": "posix", 00:13:51.728 "recv_buf_size": 2097152, 00:13:51.728 "send_buf_size": 2097152, 00:13:51.728 "enable_recv_pipe": true, 00:13:51.728 "enable_quickack": false, 00:13:51.728 "enable_placement_id": 0, 00:13:51.728 "enable_zerocopy_send_server": true, 00:13:51.728 "enable_zerocopy_send_client": false, 00:13:51.728 "zerocopy_threshold": 0, 00:13:51.728 "tls_version": 0, 00:13:51.728 "enable_ktls": false 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "sock_impl_set_options", 00:13:51.728 "params": { 00:13:51.728 "impl_name": "uring", 00:13:51.728 "recv_buf_size": 2097152, 00:13:51.728 "send_buf_size": 2097152, 00:13:51.728 "enable_recv_pipe": true, 00:13:51.728 "enable_quickack": false, 00:13:51.728 "enable_placement_id": 0, 00:13:51.728 "enable_zerocopy_send_server": false, 00:13:51.728 "enable_zerocopy_send_client": false, 00:13:51.728 "zerocopy_threshold": 0, 00:13:51.728 "tls_version": 0, 00:13:51.728 "enable_ktls": false 00:13:51.728 } 00:13:51.728 } 00:13:51.728 ] 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "subsystem": "vmd", 00:13:51.728 "config": [] 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "subsystem": "accel", 00:13:51.728 "config": [ 00:13:51.728 { 00:13:51.728 "method": "accel_set_options", 00:13:51.728 "params": { 00:13:51.728 "small_cache_size": 128, 00:13:51.728 "large_cache_size": 16, 00:13:51.728 "task_count": 2048, 00:13:51.728 "sequence_count": 2048, 00:13:51.728 "buf_count": 2048 00:13:51.728 } 00:13:51.728 } 00:13:51.728 ] 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "subsystem": "bdev", 00:13:51.728 "config": [ 00:13:51.728 { 00:13:51.728 "method": "bdev_set_options", 00:13:51.728 "params": { 00:13:51.728 "bdev_io_pool_size": 65535, 00:13:51.728 "bdev_io_cache_size": 256, 00:13:51.728 "bdev_auto_examine": true, 00:13:51.728 "iobuf_small_cache_size": 128, 00:13:51.728 "iobuf_large_cache_size": 16 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "bdev_raid_set_options", 00:13:51.728 "params": { 00:13:51.728 "process_window_size_kb": 1024, 00:13:51.728 "process_max_bandwidth_mb_sec": 0 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "bdev_iscsi_set_options", 00:13:51.728 "params": { 00:13:51.728 "timeout_sec": 30 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "bdev_nvme_set_options", 00:13:51.728 "params": { 00:13:51.728 "action_on_timeout": "none", 00:13:51.728 "timeout_us": 0, 00:13:51.728 "timeout_admin_us": 0, 00:13:51.728 "keep_alive_timeout_ms": 10000, 00:13:51.728 "arbitration_burst": 0, 00:13:51.728 "low_priority_weight": 0, 00:13:51.728 "medium_priority_weight": 0, 00:13:51.728 "high_priority_weight": 0, 00:13:51.728 "nvme_adminq_poll_period_us": 10000, 00:13:51.728 "nvme_ioq_poll_period_us": 0, 00:13:51.728 "io_queue_requests": 0, 00:13:51.728 "delay_cmd_submit": true, 00:13:51.728 "transport_retry_count": 4, 00:13:51.728 "bdev_retry_count": 3, 00:13:51.728 "transport_ack_timeout": 0, 00:13:51.728 "ctrlr_loss_timeout_sec": 0, 00:13:51.728 "reconnect_delay_sec": 0, 00:13:51.728 "fast_io_fail_timeout_sec": 0, 00:13:51.728 "disable_auto_failback": false, 00:13:51.728 "generate_uuids": false, 00:13:51.728 "transport_tos": 0, 00:13:51.728 "nvme_error_stat": false, 00:13:51.728 "rdma_srq_size": 0, 00:13:51.728 "io_path_stat": false, 00:13:51.728 "allow_accel_sequence": false, 00:13:51.728 "rdma_max_cq_size": 0, 
00:13:51.728 "rdma_cm_event_timeout_ms": 0, 00:13:51.728 "dhchap_digests": [ 00:13:51.728 "sha256", 00:13:51.728 "sha384", 00:13:51.728 "sha512" 00:13:51.728 ], 00:13:51.728 "dhchap_dhgroups": [ 00:13:51.728 "null", 00:13:51.728 "ffdhe2048", 00:13:51.728 "ffdhe3072", 00:13:51.728 "ffdhe4096", 00:13:51.728 "ffdhe6144", 00:13:51.728 "ffdhe8192" 00:13:51.728 ], 00:13:51.728 "rdma_umr_per_io": false 00:13:51.728 } 00:13:51.728 }, 00:13:51.728 { 00:13:51.728 "method": "bdev_nvme_set_hotplug", 00:13:51.729 "params": { 00:13:51.729 "period_us": 100000, 00:13:51.729 "enable": false 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "bdev_malloc_create", 00:13:51.729 "params": { 00:13:51.729 "name": "malloc0", 00:13:51.729 "num_blocks": 8192, 00:13:51.729 "block_size": 4096, 00:13:51.729 "physical_block_size": 4096, 00:13:51.729 "uuid": "86dc926a-744d-4db9-a23f-d3bc5a8d8b8b", 00:13:51.729 "optimal_io_boundary": 0, 00:13:51.729 "md_size": 0, 00:13:51.729 "dif_type": 0, 00:13:51.729 "dif_is_head_of_md": false, 00:13:51.729 "dif_pi_format": 0 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "bdev_wait_for_examine" 00:13:51.729 } 00:13:51.729 ] 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "subsystem": "nbd", 00:13:51.729 "config": [] 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "subsystem": "scheduler", 00:13:51.729 "config": [ 00:13:51.729 { 00:13:51.729 "method": "framework_set_scheduler", 00:13:51.729 "params": { 00:13:51.729 "name": "static" 00:13:51.729 } 00:13:51.729 } 00:13:51.729 ] 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "subsystem": "nvmf", 00:13:51.729 "config": [ 00:13:51.729 { 00:13:51.729 "method": "nvmf_set_config", 00:13:51.729 "params": { 00:13:51.729 "discovery_filter": "match_any", 00:13:51.729 "admin_cmd_passthru": { 00:13:51.729 "identify_ctrlr": false 00:13:51.729 }, 00:13:51.729 "dhchap_digests": [ 00:13:51.729 "sha256", 00:13:51.729 "sha384", 00:13:51.729 "sha512" 00:13:51.729 ], 00:13:51.729 "dhchap_dhgroups": [ 00:13:51.729 "null", 00:13:51.729 "ffdhe2048", 00:13:51.729 "ffdhe3072", 00:13:51.729 "ffdhe4096", 00:13:51.729 "ffdhe6144", 00:13:51.729 "ffdhe8192" 00:13:51.729 ] 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_set_max_subsystems", 00:13:51.729 "params": { 00:13:51.729 "max_subsystems": 1024 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_set_crdt", 00:13:51.729 "params": { 00:13:51.729 "crdt1": 0, 00:13:51.729 "crdt2": 0, 00:13:51.729 "crdt3": 0 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_create_transport", 00:13:51.729 "params": { 00:13:51.729 "trtype": "TCP", 00:13:51.729 "max_queue_depth": 128, 00:13:51.729 "max_io_qpairs_per_ctrlr": 127, 00:13:51.729 "in_capsule_data_size": 4096, 00:13:51.729 "max_io_size": 131072, 00:13:51.729 "io_unit_size": 131072, 00:13:51.729 "max_aq_depth": 128, 00:13:51.729 "num_shared_buffers": 511, 00:13:51.729 "buf_cache_size": 4294967295, 00:13:51.729 "dif_insert_or_strip": false, 00:13:51.729 "zcopy": false, 00:13:51.729 "c2h_success": false, 00:13:51.729 "sock_priority": 0, 00:13:51.729 "abort_timeout_sec": 1, 00:13:51.729 "ack_timeout": 0, 00:13:51.729 "data_wr_pool_size": 0 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_create_subsystem", 00:13:51.729 "params": { 00:13:51.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.729 "allow_any_host": false, 00:13:51.729 "serial_number": "SPDK00000000000001", 00:13:51.729 "model_number": "SPDK bdev Controller", 00:13:51.729 "max_namespaces": 
10, 00:13:51.729 "min_cntlid": 1, 00:13:51.729 "max_cntlid": 65519, 00:13:51.729 "ana_reporting": false 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_subsystem_add_host", 00:13:51.729 "params": { 00:13:51.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.729 "host": "nqn.2016-06.io.spdk:host1", 00:13:51.729 "psk": "key0" 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_subsystem_add_ns", 00:13:51.729 "params": { 00:13:51.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.729 "namespace": { 00:13:51.729 "nsid": 1, 00:13:51.729 "bdev_name": "malloc0", 00:13:51.729 "nguid": "86DC926A744D4DB9A23FD3BC5A8D8B8B", 00:13:51.729 "uuid": "86dc926a-744d-4db9-a23f-d3bc5a8d8b8b", 00:13:51.729 "no_auto_visible": false 00:13:51.729 } 00:13:51.729 } 00:13:51.729 }, 00:13:51.729 { 00:13:51.729 "method": "nvmf_subsystem_add_listener", 00:13:51.729 "params": { 00:13:51.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.729 "listen_address": { 00:13:51.729 "trtype": "TCP", 00:13:51.729 "adrfam": "IPv4", 00:13:51.729 "traddr": "10.0.0.3", 00:13:51.729 "trsvcid": "4420" 00:13:51.729 }, 00:13:51.729 "secure_channel": true 00:13:51.729 } 00:13:51.729 } 00:13:51.729 ] 00:13:51.729 } 00:13:51.729 ] 00:13:51.729 }' 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72863 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72863 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72863 ']' 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.729 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.988 [2024-12-11 08:47:59.545210] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:13:51.988 [2024-12-11 08:47:59.545279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.988 [2024-12-11 08:47:59.687503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.988 [2024-12-11 08:47:59.720115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.988 [2024-12-11 08:47:59.720543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:51.989 [2024-12-11 08:47:59.720794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.989 [2024-12-11 08:47:59.720997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.989 [2024-12-11 08:47:59.721216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.989 [2024-12-11 08:47:59.722740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.248 [2024-12-11 08:47:59.864636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.248 [2024-12-11 08:47:59.923602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.248 [2024-12-11 08:47:59.955528] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:52.248 [2024-12-11 08:47:59.955879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:52.815 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.815 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:52.815 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.815 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.815 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72895 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72895 /var/tmp/bdevperf.sock 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72895 ']' 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
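The two "-c /dev/fd/NN" arguments in this stretch of the log (the nvmf target restarted with "-c /dev/fd/62" at tls.sh@205, and bdevperf launched with "-c /dev/fd/63" at tls.sh@206 just above) point at the JSON dumps the script echoes rather than at files on disk; the /dev/fd paths indicate bash process substitution. A minimal sketch of that pattern, reusing the paths and flags shown in the log (the exact quoting inside target/tls.sh is assumed, not copied from the script):

  # Capture the running bdevperf configuration over its RPC socket, then hand it to a
  # fresh bdevperf instance as a pseudo-file created by process substitution.
  bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")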
00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.075 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:53.075 "subsystems": [ 00:13:53.075 { 00:13:53.075 "subsystem": "keyring", 00:13:53.075 "config": [ 00:13:53.075 { 00:13:53.075 "method": "keyring_file_add_key", 00:13:53.075 "params": { 00:13:53.075 "name": "key0", 00:13:53.075 "path": "/tmp/tmp.qzb2ejOdP9" 00:13:53.075 } 00:13:53.075 } 00:13:53.075 ] 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "subsystem": "iobuf", 00:13:53.075 "config": [ 00:13:53.075 { 00:13:53.075 "method": "iobuf_set_options", 00:13:53.075 "params": { 00:13:53.075 "small_pool_count": 8192, 00:13:53.075 "large_pool_count": 1024, 00:13:53.075 "small_bufsize": 8192, 00:13:53.075 "large_bufsize": 135168, 00:13:53.075 "enable_numa": false 00:13:53.075 } 00:13:53.075 } 00:13:53.075 ] 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "subsystem": "sock", 00:13:53.075 "config": [ 00:13:53.075 { 00:13:53.075 "method": "sock_set_default_impl", 00:13:53.075 "params": { 00:13:53.075 "impl_name": "uring" 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "sock_impl_set_options", 00:13:53.075 "params": { 00:13:53.075 "impl_name": "ssl", 00:13:53.075 "recv_buf_size": 4096, 00:13:53.075 "send_buf_size": 4096, 00:13:53.075 "enable_recv_pipe": true, 00:13:53.075 "enable_quickack": false, 00:13:53.075 "enable_placement_id": 0, 00:13:53.075 "enable_zerocopy_send_server": true, 00:13:53.075 "enable_zerocopy_send_client": false, 00:13:53.075 "zerocopy_threshold": 0, 00:13:53.075 "tls_version": 0, 00:13:53.075 "enable_ktls": false 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "sock_impl_set_options", 00:13:53.075 "params": { 00:13:53.075 "impl_name": "posix", 00:13:53.075 "recv_buf_size": 2097152, 00:13:53.075 "send_buf_size": 2097152, 00:13:53.075 "enable_recv_pipe": true, 00:13:53.075 "enable_quickack": false, 00:13:53.075 "enable_placement_id": 0, 00:13:53.075 "enable_zerocopy_send_server": true, 00:13:53.075 "enable_zerocopy_send_client": false, 00:13:53.075 "zerocopy_threshold": 0, 00:13:53.075 "tls_version": 0, 00:13:53.075 "enable_ktls": false 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "sock_impl_set_options", 00:13:53.075 "params": { 00:13:53.075 "impl_name": "uring", 00:13:53.075 "recv_buf_size": 2097152, 00:13:53.075 "send_buf_size": 2097152, 00:13:53.075 "enable_recv_pipe": true, 00:13:53.075 "enable_quickack": false, 00:13:53.075 "enable_placement_id": 0, 00:13:53.075 "enable_zerocopy_send_server": false, 00:13:53.075 "enable_zerocopy_send_client": false, 00:13:53.075 "zerocopy_threshold": 0, 00:13:53.075 "tls_version": 0, 00:13:53.075 "enable_ktls": false 00:13:53.075 } 00:13:53.075 } 00:13:53.075 ] 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "subsystem": "vmd", 00:13:53.075 "config": [] 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "subsystem": "accel", 00:13:53.075 "config": [ 00:13:53.075 { 00:13:53.075 "method": "accel_set_options", 00:13:53.075 "params": { 00:13:53.075 "small_cache_size": 128, 00:13:53.075 "large_cache_size": 16, 00:13:53.075 "task_count": 2048, 00:13:53.075 "sequence_count": 2048, 00:13:53.075 "buf_count": 2048 00:13:53.075 } 00:13:53.075 } 00:13:53.075 ] 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "subsystem": "bdev", 00:13:53.075 "config": [ 00:13:53.075 { 00:13:53.075 "method": 
"bdev_set_options", 00:13:53.075 "params": { 00:13:53.075 "bdev_io_pool_size": 65535, 00:13:53.075 "bdev_io_cache_size": 256, 00:13:53.075 "bdev_auto_examine": true, 00:13:53.075 "iobuf_small_cache_size": 128, 00:13:53.075 "iobuf_large_cache_size": 16 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "bdev_raid_set_options", 00:13:53.075 "params": { 00:13:53.075 "process_window_size_kb": 1024, 00:13:53.075 "process_max_bandwidth_mb_sec": 0 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "bdev_iscsi_set_options", 00:13:53.075 "params": { 00:13:53.075 "timeout_sec": 30 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "bdev_nvme_set_options", 00:13:53.075 "params": { 00:13:53.075 "action_on_timeout": "none", 00:13:53.075 "timeout_us": 0, 00:13:53.075 "timeout_admin_us": 0, 00:13:53.075 "keep_alive_timeout_ms": 10000, 00:13:53.075 "arbitration_burst": 0, 00:13:53.075 "low_priority_weight": 0, 00:13:53.075 "medium_priority_weight": 0, 00:13:53.075 "high_priority_weight": 0, 00:13:53.075 "nvme_adminq_poll_period_us": 10000, 00:13:53.075 "nvme_ioq_poll_period_us": 0, 00:13:53.075 "io_queue_requests": 512, 00:13:53.075 "delay_cmd_submit": true, 00:13:53.075 "transport_retry_count": 4, 00:13:53.075 "bdev_retry_count": 3, 00:13:53.075 "transport_ack_timeout": 0, 00:13:53.075 "ctrlr_loss_timeout_sec": 0, 00:13:53.075 "reconnect_delay_sec": 0, 00:13:53.075 "fast_io_fail_timeout_sec": 0, 00:13:53.075 "disable_auto_failback": false, 00:13:53.075 "generate_uuids": false, 00:13:53.075 "transport_tos": 0, 00:13:53.075 "nvme_error_stat": false, 00:13:53.075 "rdma_srq_size": 0, 00:13:53.075 "io_path_stat": false, 00:13:53.075 "allow_accel_sequence": false, 00:13:53.075 "rdma_max_cq_size": 0, 00:13:53.075 "rdma_cm_event_timeout_ms": 0, 00:13:53.075 "dhchap_digests": [ 00:13:53.075 "sha256", 00:13:53.075 "sha384", 00:13:53.075 "sha512" 00:13:53.075 ], 00:13:53.075 "dhchap_dhgroups": [ 00:13:53.075 "null", 00:13:53.075 "ffdhe2048", 00:13:53.075 "ffdhe3072", 00:13:53.075 "ffdhe4096", 00:13:53.075 "ffdhe6144", 00:13:53.075 "ffdhe8192" 00:13:53.075 ], 00:13:53.075 "rdma_umr_per_io": false 00:13:53.075 } 00:13:53.075 }, 00:13:53.075 { 00:13:53.075 "method": "bdev_nvme_attach_controller", 00:13:53.075 "params": { 00:13:53.075 "name": "TLSTEST", 00:13:53.075 "trtype": "TCP", 00:13:53.075 "adrfam": "IPv4", 00:13:53.075 "traddr": "10.0.0.3", 00:13:53.075 "trsvcid": "4420", 00:13:53.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.076 "prchk_reftag": false, 00:13:53.076 "prchk_guard": false, 00:13:53.076 "ctrlr_loss_timeout_sec": 0, 00:13:53.076 "reconnect_delay_sec": 0, 00:13:53.076 "fast_io_fail_timeout_sec": 0, 00:13:53.076 "psk": "key0", 00:13:53.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.076 "hdgst": false, 00:13:53.076 "ddgst": false, 00:13:53.076 "multipath": "multipath" 00:13:53.076 } 00:13:53.076 }, 00:13:53.076 { 00:13:53.076 "method": "bdev_nvme_set_hotplug", 00:13:53.076 "params": { 00:13:53.076 "period_us": 100000, 00:13:53.076 "enable": false 00:13:53.076 } 00:13:53.076 }, 00:13:53.076 { 00:13:53.076 "method": "bdev_wait_for_examine" 00:13:53.076 } 00:13:53.076 ] 00:13:53.076 }, 00:13:53.076 { 00:13:53.076 "subsystem": "nbd", 00:13:53.076 "config": [] 00:13:53.076 } 00:13:53.076 ] 00:13:53.076 }' 00:13:53.076 [2024-12-11 08:48:00.664002] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:13:53.076 [2024-12-11 08:48:00.664107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:13:53.076 [2024-12-11 08:48:00.813357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.335 [2024-12-11 08:48:00.851965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.335 [2024-12-11 08:48:00.965707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.335 [2024-12-11 08:48:01.000799] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.902 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.902 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:53.902 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:54.160 Running I/O for 10 seconds... 00:13:56.032 3970.00 IOPS, 15.51 MiB/s [2024-12-11T08:48:05.183Z] 4107.00 IOPS, 16.04 MiB/s [2024-12-11T08:48:06.118Z] 4156.67 IOPS, 16.24 MiB/s [2024-12-11T08:48:07.122Z] 4185.00 IOPS, 16.35 MiB/s [2024-12-11T08:48:08.074Z] 4197.60 IOPS, 16.40 MiB/s [2024-12-11T08:48:09.010Z] 4204.50 IOPS, 16.42 MiB/s [2024-12-11T08:48:09.947Z] 4213.14 IOPS, 16.46 MiB/s [2024-12-11T08:48:10.884Z] 4224.25 IOPS, 16.50 MiB/s [2024-12-11T08:48:11.821Z] 4226.78 IOPS, 16.51 MiB/s [2024-12-11T08:48:11.822Z] 4232.20 IOPS, 16.53 MiB/s 00:14:04.048 Latency(us) 00:14:04.048 [2024-12-11T08:48:11.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:04.048 Verification LBA range: start 0x0 length 0x2000 00:14:04.048 TLSTESTn1 : 10.02 4238.57 16.56 0.00 0.00 30145.46 5153.51 24188.74 00:14:04.048 [2024-12-11T08:48:11.822Z] =================================================================================================================== 00:14:04.048 [2024-12-11T08:48:11.822Z] Total : 4238.57 16.56 0.00 0.00 30145.46 5153.51 24188.74 00:14:04.048 { 00:14:04.048 "results": [ 00:14:04.048 { 00:14:04.048 "job": "TLSTESTn1", 00:14:04.048 "core_mask": "0x4", 00:14:04.048 "workload": "verify", 00:14:04.048 "status": "finished", 00:14:04.048 "verify_range": { 00:14:04.048 "start": 0, 00:14:04.048 "length": 8192 00:14:04.048 }, 00:14:04.048 "queue_depth": 128, 00:14:04.048 "io_size": 4096, 00:14:04.048 "runtime": 10.015182, 00:14:04.048 "iops": 4238.565010600906, 00:14:04.048 "mibps": 16.55689457265979, 00:14:04.048 "io_failed": 0, 00:14:04.048 "io_timeout": 0, 00:14:04.048 "avg_latency_us": 30145.461417625014, 00:14:04.048 "min_latency_us": 5153.512727272728, 00:14:04.048 "max_latency_us": 24188.741818181818 00:14:04.048 } 00:14:04.048 ], 00:14:04.048 "core_count": 1 00:14:04.048 } 00:14:04.048 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.048 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72895 00:14:04.048 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72895 ']' 00:14:04.048 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72895 00:14:04.048 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.307 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72895 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72895' 00:14:04.308 killing process with pid 72895 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72895 00:14:04.308 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.308 00:14:04.308 Latency(us) 00:14:04.308 [2024-12-11T08:48:12.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.308 [2024-12-11T08:48:12.082Z] =================================================================================================================== 00:14:04.308 [2024-12-11T08:48:12.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72895 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72863 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72863 ']' 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72863 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.308 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72863 00:14:04.308 killing process with pid 72863 00:14:04.308 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:04.308 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:04.308 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72863' 00:14:04.308 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72863 00:14:04.308 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72863 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73033 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:04.567 08:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73033 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73033 ']' 00:14:04.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.567 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.567 [2024-12-11 08:48:12.214368] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:04.567 [2024-12-11 08:48:12.214612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.826 [2024-12-11 08:48:12.363458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.826 [2024-12-11 08:48:12.400922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.826 [2024-12-11 08:48:12.401270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.826 [2024-12-11 08:48:12.401295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.826 [2024-12-11 08:48:12.401305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.826 [2024-12-11 08:48:12.401314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:04.826 [2024-12-11 08:48:12.401684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.826 [2024-12-11 08:48:12.435471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qzb2ejOdP9 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzb2ejOdP9 00:14:04.826 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:05.085 [2024-12-11 08:48:12.754617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.085 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:05.344 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:05.603 [2024-12-11 08:48:13.294757] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.603 [2024-12-11 08:48:13.294967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.603 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:05.862 malloc0 00:14:05.862 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:06.121 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:14:06.380 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
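The setup_nvmf_tgt call at tls.sh@221 reduces to the rpc.py sequence logged above: create the TCP transport, create subsystem cnode1, add a TLS-enabled listener (-k), back it with a malloc bdev and namespace, register the PSK file as key0, and authorize host1 against that key. Restated as a plain shell sketch (commands and arguments taken from the log; /tmp/tmp.qzb2ejOdP9 is the same key0 PSK path used by the earlier bdevperf runs):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0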
00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=73076 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 73076 /var/tmp/bdevperf.sock 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73076 ']' 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.639 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.640 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.640 [2024-12-11 08:48:14.306230] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:06.640 [2024-12-11 08:48:14.306345] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73076 ] 00:14:06.899 [2024-12-11 08:48:14.459353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.899 [2024-12-11 08:48:14.498768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.899 [2024-12-11 08:48:14.532817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.466 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.466 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.466 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:14:08.037 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:08.037 [2024-12-11 08:48:15.740048] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.295 nvme0n1 00:14:08.296 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:08.296 Running I/O for 1 seconds... 
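On the initiator side, bdevperf (pid 73076) is handed the same PSK through its own RPC socket before the controller is attached; the two rpc.py calls at tls.sh@229 and tls.sh@230 above amount to the following sketch (arguments as logged):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the shared PSK inside bdevperf, then attach the TLS-protected controller.
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

This attach produces the nvme0n1 bdev that the one-second verify run below exercises.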
00:14:09.231 4213.00 IOPS, 16.46 MiB/s 00:14:09.231 Latency(us) 00:14:09.231 [2024-12-11T08:48:17.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.232 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:09.232 Verification LBA range: start 0x0 length 0x2000 00:14:09.232 nvme0n1 : 1.01 4280.56 16.72 0.00 0.00 29664.40 5064.15 24784.52 00:14:09.232 [2024-12-11T08:48:17.006Z] =================================================================================================================== 00:14:09.232 [2024-12-11T08:48:17.006Z] Total : 4280.56 16.72 0.00 0.00 29664.40 5064.15 24784.52 00:14:09.232 { 00:14:09.232 "results": [ 00:14:09.232 { 00:14:09.232 "job": "nvme0n1", 00:14:09.232 "core_mask": "0x2", 00:14:09.232 "workload": "verify", 00:14:09.232 "status": "finished", 00:14:09.232 "verify_range": { 00:14:09.232 "start": 0, 00:14:09.232 "length": 8192 00:14:09.232 }, 00:14:09.232 "queue_depth": 128, 00:14:09.232 "io_size": 4096, 00:14:09.232 "runtime": 1.014119, 00:14:09.232 "iops": 4280.562734748091, 00:14:09.232 "mibps": 16.72094818260973, 00:14:09.232 "io_failed": 0, 00:14:09.232 "io_timeout": 0, 00:14:09.232 "avg_latency_us": 29664.3966978702, 00:14:09.232 "min_latency_us": 5064.145454545454, 00:14:09.232 "max_latency_us": 24784.523636363636 00:14:09.232 } 00:14:09.232 ], 00:14:09.232 "core_count": 1 00:14:09.232 } 00:14:09.232 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 73076 00:14:09.232 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73076 ']' 00:14:09.232 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73076 00:14:09.232 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.232 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.232 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73076 00:14:09.491 killing process with pid 73076 00:14:09.491 Received shutdown signal, test time was about 1.000000 seconds 00:14:09.491 00:14:09.491 Latency(us) 00:14:09.491 [2024-12-11T08:48:17.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.491 [2024-12-11T08:48:17.265Z] =================================================================================================================== 00:14:09.491 [2024-12-11T08:48:17.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73076' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73076 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73076 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 73033 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73033 ']' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73033 00:14:09.491 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73033 00:14:09.491 killing process with pid 73033 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73033' 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73033 00:14:09.491 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73033 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73127 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73127 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73127 ']' 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.751 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 [2024-12-11 08:48:17.401839] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:09.751 [2024-12-11 08:48:17.401977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.010 [2024-12-11 08:48:17.548149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.010 [2024-12-11 08:48:17.577529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.010 [2024-12-11 08:48:17.577612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.010 [2024-12-11 08:48:17.577638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.010 [2024-12-11 08:48:17.577646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.010 [2024-12-11 08:48:17.577653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.010 [2024-12-11 08:48:17.577959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.010 [2024-12-11 08:48:17.608206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.947 [2024-12-11 08:48:18.437698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.947 malloc0 00:14:10.947 [2024-12-11 08:48:18.464296] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.947 [2024-12-11 08:48:18.464520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=73159 00:14:10.947 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 73159 /var/tmp/bdevperf.sock 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73159 ']' 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.948 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.948 [2024-12-11 08:48:18.551082] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:10.948 [2024-12-11 08:48:18.551185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:14:10.948 [2024-12-11 08:48:18.701289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.206 [2024-12-11 08:48:18.741558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.206 [2024-12-11 08:48:18.774159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:11.206 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.206 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:11.206 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzb2ejOdP9 00:14:11.465 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:11.724 [2024-12-11 08:48:19.292469] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.724 nvme0n1 00:14:11.724 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:11.724 Running I/O for 1 seconds... 
00:14:13.101 4129.00 IOPS, 16.13 MiB/s 00:14:13.101 Latency(us) 00:14:13.101 [2024-12-11T08:48:20.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.101 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:13.101 Verification LBA range: start 0x0 length 0x2000 00:14:13.101 nvme0n1 : 1.02 4172.16 16.30 0.00 0.00 30266.59 3336.38 22520.55 00:14:13.101 [2024-12-11T08:48:20.875Z] =================================================================================================================== 00:14:13.101 [2024-12-11T08:48:20.875Z] Total : 4172.16 16.30 0.00 0.00 30266.59 3336.38 22520.55 00:14:13.101 { 00:14:13.101 "results": [ 00:14:13.101 { 00:14:13.101 "job": "nvme0n1", 00:14:13.101 "core_mask": "0x2", 00:14:13.101 "workload": "verify", 00:14:13.101 "status": "finished", 00:14:13.101 "verify_range": { 00:14:13.101 "start": 0, 00:14:13.101 "length": 8192 00:14:13.101 }, 00:14:13.101 "queue_depth": 128, 00:14:13.101 "io_size": 4096, 00:14:13.101 "runtime": 1.020335, 00:14:13.101 "iops": 4172.159143810612, 00:14:13.101 "mibps": 16.2974966555102, 00:14:13.101 "io_failed": 0, 00:14:13.101 "io_timeout": 0, 00:14:13.101 "avg_latency_us": 30266.585824417536, 00:14:13.101 "min_latency_us": 3336.378181818182, 00:14:13.101 "max_latency_us": 22520.552727272727 00:14:13.101 } 00:14:13.101 ], 00:14:13.101 "core_count": 1 00:14:13.101 } 00:14:13.101 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:13.101 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.101 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.101 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.101 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:13.101 "subsystems": [ 00:14:13.101 { 00:14:13.101 "subsystem": "keyring", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "keyring_file_add_key", 00:14:13.101 "params": { 00:14:13.101 "name": "key0", 00:14:13.101 "path": "/tmp/tmp.qzb2ejOdP9" 00:14:13.101 } 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "iobuf", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "iobuf_set_options", 00:14:13.101 "params": { 00:14:13.101 "small_pool_count": 8192, 00:14:13.101 "large_pool_count": 1024, 00:14:13.101 "small_bufsize": 8192, 00:14:13.101 "large_bufsize": 135168, 00:14:13.101 "enable_numa": false 00:14:13.101 } 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "sock", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "sock_set_default_impl", 00:14:13.101 "params": { 00:14:13.101 "impl_name": "uring" 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "sock_impl_set_options", 00:14:13.101 "params": { 00:14:13.101 "impl_name": "ssl", 00:14:13.101 "recv_buf_size": 4096, 00:14:13.101 "send_buf_size": 4096, 00:14:13.101 "enable_recv_pipe": true, 00:14:13.101 "enable_quickack": false, 00:14:13.101 "enable_placement_id": 0, 00:14:13.101 "enable_zerocopy_send_server": true, 00:14:13.101 "enable_zerocopy_send_client": false, 00:14:13.101 "zerocopy_threshold": 0, 00:14:13.101 "tls_version": 0, 00:14:13.101 "enable_ktls": false 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "sock_impl_set_options", 00:14:13.101 "params": { 00:14:13.101 "impl_name": "posix", 
00:14:13.101 "recv_buf_size": 2097152, 00:14:13.101 "send_buf_size": 2097152, 00:14:13.101 "enable_recv_pipe": true, 00:14:13.101 "enable_quickack": false, 00:14:13.101 "enable_placement_id": 0, 00:14:13.101 "enable_zerocopy_send_server": true, 00:14:13.101 "enable_zerocopy_send_client": false, 00:14:13.101 "zerocopy_threshold": 0, 00:14:13.101 "tls_version": 0, 00:14:13.101 "enable_ktls": false 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "sock_impl_set_options", 00:14:13.101 "params": { 00:14:13.101 "impl_name": "uring", 00:14:13.101 "recv_buf_size": 2097152, 00:14:13.101 "send_buf_size": 2097152, 00:14:13.101 "enable_recv_pipe": true, 00:14:13.101 "enable_quickack": false, 00:14:13.101 "enable_placement_id": 0, 00:14:13.101 "enable_zerocopy_send_server": false, 00:14:13.101 "enable_zerocopy_send_client": false, 00:14:13.101 "zerocopy_threshold": 0, 00:14:13.101 "tls_version": 0, 00:14:13.101 "enable_ktls": false 00:14:13.101 } 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "vmd", 00:14:13.101 "config": [] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "accel", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "accel_set_options", 00:14:13.101 "params": { 00:14:13.101 "small_cache_size": 128, 00:14:13.101 "large_cache_size": 16, 00:14:13.101 "task_count": 2048, 00:14:13.101 "sequence_count": 2048, 00:14:13.101 "buf_count": 2048 00:14:13.101 } 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "bdev", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "bdev_set_options", 00:14:13.101 "params": { 00:14:13.101 "bdev_io_pool_size": 65535, 00:14:13.101 "bdev_io_cache_size": 256, 00:14:13.101 "bdev_auto_examine": true, 00:14:13.101 "iobuf_small_cache_size": 128, 00:14:13.101 "iobuf_large_cache_size": 16 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_raid_set_options", 00:14:13.101 "params": { 00:14:13.101 "process_window_size_kb": 1024, 00:14:13.101 "process_max_bandwidth_mb_sec": 0 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_iscsi_set_options", 00:14:13.101 "params": { 00:14:13.101 "timeout_sec": 30 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_nvme_set_options", 00:14:13.101 "params": { 00:14:13.101 "action_on_timeout": "none", 00:14:13.101 "timeout_us": 0, 00:14:13.101 "timeout_admin_us": 0, 00:14:13.101 "keep_alive_timeout_ms": 10000, 00:14:13.101 "arbitration_burst": 0, 00:14:13.101 "low_priority_weight": 0, 00:14:13.101 "medium_priority_weight": 0, 00:14:13.101 "high_priority_weight": 0, 00:14:13.101 "nvme_adminq_poll_period_us": 10000, 00:14:13.101 "nvme_ioq_poll_period_us": 0, 00:14:13.101 "io_queue_requests": 0, 00:14:13.101 "delay_cmd_submit": true, 00:14:13.101 "transport_retry_count": 4, 00:14:13.101 "bdev_retry_count": 3, 00:14:13.101 "transport_ack_timeout": 0, 00:14:13.101 "ctrlr_loss_timeout_sec": 0, 00:14:13.101 "reconnect_delay_sec": 0, 00:14:13.101 "fast_io_fail_timeout_sec": 0, 00:14:13.101 "disable_auto_failback": false, 00:14:13.101 "generate_uuids": false, 00:14:13.101 "transport_tos": 0, 00:14:13.101 "nvme_error_stat": false, 00:14:13.101 "rdma_srq_size": 0, 00:14:13.101 "io_path_stat": false, 00:14:13.101 "allow_accel_sequence": false, 00:14:13.101 "rdma_max_cq_size": 0, 00:14:13.101 "rdma_cm_event_timeout_ms": 0, 00:14:13.101 "dhchap_digests": [ 00:14:13.101 "sha256", 00:14:13.101 "sha384", 00:14:13.101 "sha512" 00:14:13.101 ], 00:14:13.101 
"dhchap_dhgroups": [ 00:14:13.101 "null", 00:14:13.101 "ffdhe2048", 00:14:13.101 "ffdhe3072", 00:14:13.101 "ffdhe4096", 00:14:13.101 "ffdhe6144", 00:14:13.101 "ffdhe8192" 00:14:13.101 ], 00:14:13.101 "rdma_umr_per_io": false 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_nvme_set_hotplug", 00:14:13.101 "params": { 00:14:13.101 "period_us": 100000, 00:14:13.101 "enable": false 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_malloc_create", 00:14:13.101 "params": { 00:14:13.101 "name": "malloc0", 00:14:13.101 "num_blocks": 8192, 00:14:13.101 "block_size": 4096, 00:14:13.101 "physical_block_size": 4096, 00:14:13.101 "uuid": "29ccc90a-2410-41bb-8489-1b4ece060ab4", 00:14:13.101 "optimal_io_boundary": 0, 00:14:13.101 "md_size": 0, 00:14:13.101 "dif_type": 0, 00:14:13.101 "dif_is_head_of_md": false, 00:14:13.101 "dif_pi_format": 0 00:14:13.101 } 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "method": "bdev_wait_for_examine" 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "nbd", 00:14:13.101 "config": [] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "scheduler", 00:14:13.101 "config": [ 00:14:13.101 { 00:14:13.101 "method": "framework_set_scheduler", 00:14:13.101 "params": { 00:14:13.101 "name": "static" 00:14:13.101 } 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "subsystem": "nvmf", 00:14:13.101 "config": [ 00:14:13.102 { 00:14:13.102 "method": "nvmf_set_config", 00:14:13.102 "params": { 00:14:13.102 "discovery_filter": "match_any", 00:14:13.102 "admin_cmd_passthru": { 00:14:13.102 "identify_ctrlr": false 00:14:13.102 }, 00:14:13.102 "dhchap_digests": [ 00:14:13.102 "sha256", 00:14:13.102 "sha384", 00:14:13.102 "sha512" 00:14:13.102 ], 00:14:13.102 "dhchap_dhgroups": [ 00:14:13.102 "null", 00:14:13.102 "ffdhe2048", 00:14:13.102 "ffdhe3072", 00:14:13.102 "ffdhe4096", 00:14:13.102 "ffdhe6144", 00:14:13.102 "ffdhe8192" 00:14:13.102 ] 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_set_max_subsystems", 00:14:13.102 "params": { 00:14:13.102 "max_subsystems": 1024 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_set_crdt", 00:14:13.102 "params": { 00:14:13.102 "crdt1": 0, 00:14:13.102 "crdt2": 0, 00:14:13.102 "crdt3": 0 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_create_transport", 00:14:13.102 "params": { 00:14:13.102 "trtype": "TCP", 00:14:13.102 "max_queue_depth": 128, 00:14:13.102 "max_io_qpairs_per_ctrlr": 127, 00:14:13.102 "in_capsule_data_size": 4096, 00:14:13.102 "max_io_size": 131072, 00:14:13.102 "io_unit_size": 131072, 00:14:13.102 "max_aq_depth": 128, 00:14:13.102 "num_shared_buffers": 511, 00:14:13.102 "buf_cache_size": 4294967295, 00:14:13.102 "dif_insert_or_strip": false, 00:14:13.102 "zcopy": false, 00:14:13.102 "c2h_success": false, 00:14:13.102 "sock_priority": 0, 00:14:13.102 "abort_timeout_sec": 1, 00:14:13.102 "ack_timeout": 0, 00:14:13.102 "data_wr_pool_size": 0 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_create_subsystem", 00:14:13.102 "params": { 00:14:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.102 "allow_any_host": false, 00:14:13.102 "serial_number": "00000000000000000000", 00:14:13.102 "model_number": "SPDK bdev Controller", 00:14:13.102 "max_namespaces": 32, 00:14:13.102 "min_cntlid": 1, 00:14:13.102 "max_cntlid": 65519, 00:14:13.102 "ana_reporting": false 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 
"method": "nvmf_subsystem_add_host", 00:14:13.102 "params": { 00:14:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.102 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.102 "psk": "key0" 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_subsystem_add_ns", 00:14:13.102 "params": { 00:14:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.102 "namespace": { 00:14:13.102 "nsid": 1, 00:14:13.102 "bdev_name": "malloc0", 00:14:13.102 "nguid": "29CCC90A241041BB84891B4ECE060AB4", 00:14:13.102 "uuid": "29ccc90a-2410-41bb-8489-1b4ece060ab4", 00:14:13.102 "no_auto_visible": false 00:14:13.102 } 00:14:13.102 } 00:14:13.102 }, 00:14:13.102 { 00:14:13.102 "method": "nvmf_subsystem_add_listener", 00:14:13.102 "params": { 00:14:13.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.102 "listen_address": { 00:14:13.102 "trtype": "TCP", 00:14:13.102 "adrfam": "IPv4", 00:14:13.102 "traddr": "10.0.0.3", 00:14:13.102 "trsvcid": "4420" 00:14:13.102 }, 00:14:13.102 "secure_channel": false, 00:14:13.102 "sock_impl": "ssl" 00:14:13.102 } 00:14:13.102 } 00:14:13.102 ] 00:14:13.102 } 00:14:13.102 ] 00:14:13.102 }' 00:14:13.102 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:13.361 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:13.361 "subsystems": [ 00:14:13.361 { 00:14:13.361 "subsystem": "keyring", 00:14:13.361 "config": [ 00:14:13.361 { 00:14:13.361 "method": "keyring_file_add_key", 00:14:13.361 "params": { 00:14:13.361 "name": "key0", 00:14:13.361 "path": "/tmp/tmp.qzb2ejOdP9" 00:14:13.361 } 00:14:13.361 } 00:14:13.361 ] 00:14:13.361 }, 00:14:13.361 { 00:14:13.361 "subsystem": "iobuf", 00:14:13.361 "config": [ 00:14:13.361 { 00:14:13.361 "method": "iobuf_set_options", 00:14:13.361 "params": { 00:14:13.361 "small_pool_count": 8192, 00:14:13.361 "large_pool_count": 1024, 00:14:13.361 "small_bufsize": 8192, 00:14:13.361 "large_bufsize": 135168, 00:14:13.361 "enable_numa": false 00:14:13.361 } 00:14:13.361 } 00:14:13.361 ] 00:14:13.361 }, 00:14:13.361 { 00:14:13.361 "subsystem": "sock", 00:14:13.361 "config": [ 00:14:13.361 { 00:14:13.361 "method": "sock_set_default_impl", 00:14:13.361 "params": { 00:14:13.361 "impl_name": "uring" 00:14:13.361 } 00:14:13.361 }, 00:14:13.361 { 00:14:13.361 "method": "sock_impl_set_options", 00:14:13.361 "params": { 00:14:13.361 "impl_name": "ssl", 00:14:13.361 "recv_buf_size": 4096, 00:14:13.361 "send_buf_size": 4096, 00:14:13.361 "enable_recv_pipe": true, 00:14:13.361 "enable_quickack": false, 00:14:13.361 "enable_placement_id": 0, 00:14:13.361 "enable_zerocopy_send_server": true, 00:14:13.361 "enable_zerocopy_send_client": false, 00:14:13.361 "zerocopy_threshold": 0, 00:14:13.362 "tls_version": 0, 00:14:13.362 "enable_ktls": false 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "sock_impl_set_options", 00:14:13.362 "params": { 00:14:13.362 "impl_name": "posix", 00:14:13.362 "recv_buf_size": 2097152, 00:14:13.362 "send_buf_size": 2097152, 00:14:13.362 "enable_recv_pipe": true, 00:14:13.362 "enable_quickack": false, 00:14:13.362 "enable_placement_id": 0, 00:14:13.362 "enable_zerocopy_send_server": true, 00:14:13.362 "enable_zerocopy_send_client": false, 00:14:13.362 "zerocopy_threshold": 0, 00:14:13.362 "tls_version": 0, 00:14:13.362 "enable_ktls": false 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "sock_impl_set_options", 00:14:13.362 "params": { 00:14:13.362 
"impl_name": "uring", 00:14:13.362 "recv_buf_size": 2097152, 00:14:13.362 "send_buf_size": 2097152, 00:14:13.362 "enable_recv_pipe": true, 00:14:13.362 "enable_quickack": false, 00:14:13.362 "enable_placement_id": 0, 00:14:13.362 "enable_zerocopy_send_server": false, 00:14:13.362 "enable_zerocopy_send_client": false, 00:14:13.362 "zerocopy_threshold": 0, 00:14:13.362 "tls_version": 0, 00:14:13.362 "enable_ktls": false 00:14:13.362 } 00:14:13.362 } 00:14:13.362 ] 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "subsystem": "vmd", 00:14:13.362 "config": [] 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "subsystem": "accel", 00:14:13.362 "config": [ 00:14:13.362 { 00:14:13.362 "method": "accel_set_options", 00:14:13.362 "params": { 00:14:13.362 "small_cache_size": 128, 00:14:13.362 "large_cache_size": 16, 00:14:13.362 "task_count": 2048, 00:14:13.362 "sequence_count": 2048, 00:14:13.362 "buf_count": 2048 00:14:13.362 } 00:14:13.362 } 00:14:13.362 ] 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "subsystem": "bdev", 00:14:13.362 "config": [ 00:14:13.362 { 00:14:13.362 "method": "bdev_set_options", 00:14:13.362 "params": { 00:14:13.362 "bdev_io_pool_size": 65535, 00:14:13.362 "bdev_io_cache_size": 256, 00:14:13.362 "bdev_auto_examine": true, 00:14:13.362 "iobuf_small_cache_size": 128, 00:14:13.362 "iobuf_large_cache_size": 16 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_raid_set_options", 00:14:13.362 "params": { 00:14:13.362 "process_window_size_kb": 1024, 00:14:13.362 "process_max_bandwidth_mb_sec": 0 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_iscsi_set_options", 00:14:13.362 "params": { 00:14:13.362 "timeout_sec": 30 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_nvme_set_options", 00:14:13.362 "params": { 00:14:13.362 "action_on_timeout": "none", 00:14:13.362 "timeout_us": 0, 00:14:13.362 "timeout_admin_us": 0, 00:14:13.362 "keep_alive_timeout_ms": 10000, 00:14:13.362 "arbitration_burst": 0, 00:14:13.362 "low_priority_weight": 0, 00:14:13.362 "medium_priority_weight": 0, 00:14:13.362 "high_priority_weight": 0, 00:14:13.362 "nvme_adminq_poll_period_us": 10000, 00:14:13.362 "nvme_ioq_poll_period_us": 0, 00:14:13.362 "io_queue_requests": 512, 00:14:13.362 "delay_cmd_submit": true, 00:14:13.362 "transport_retry_count": 4, 00:14:13.362 "bdev_retry_count": 3, 00:14:13.362 "transport_ack_timeout": 0, 00:14:13.362 "ctrlr_loss_timeout_sec": 0, 00:14:13.362 "reconnect_delay_sec": 0, 00:14:13.362 "fast_io_fail_timeout_sec": 0, 00:14:13.362 "disable_auto_failback": false, 00:14:13.362 "generate_uuids": false, 00:14:13.362 "transport_tos": 0, 00:14:13.362 "nvme_error_stat": false, 00:14:13.362 "rdma_srq_size": 0, 00:14:13.362 "io_path_stat": false, 00:14:13.362 "allow_accel_sequence": false, 00:14:13.362 "rdma_max_cq_size": 0, 00:14:13.362 "rdma_cm_event_timeout_ms": 0, 00:14:13.362 "dhchap_digests": [ 00:14:13.362 "sha256", 00:14:13.362 "sha384", 00:14:13.362 "sha512" 00:14:13.362 ], 00:14:13.362 "dhchap_dhgroups": [ 00:14:13.362 "null", 00:14:13.362 "ffdhe2048", 00:14:13.362 "ffdhe3072", 00:14:13.362 "ffdhe4096", 00:14:13.362 "ffdhe6144", 00:14:13.362 "ffdhe8192" 00:14:13.362 ], 00:14:13.362 "rdma_umr_per_io": false 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_nvme_attach_controller", 00:14:13.362 "params": { 00:14:13.362 "name": "nvme0", 00:14:13.362 "trtype": "TCP", 00:14:13.362 "adrfam": "IPv4", 00:14:13.362 "traddr": "10.0.0.3", 00:14:13.362 "trsvcid": "4420", 00:14:13.362 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:13.362 "prchk_reftag": false, 00:14:13.362 "prchk_guard": false, 00:14:13.362 "ctrlr_loss_timeout_sec": 0, 00:14:13.362 "reconnect_delay_sec": 0, 00:14:13.362 "fast_io_fail_timeout_sec": 0, 00:14:13.362 "psk": "key0", 00:14:13.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.362 "hdgst": false, 00:14:13.362 "ddgst": false, 00:14:13.362 "multipath": "multipath" 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_nvme_set_hotplug", 00:14:13.362 "params": { 00:14:13.362 "period_us": 100000, 00:14:13.362 "enable": false 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_enable_histogram", 00:14:13.362 "params": { 00:14:13.362 "name": "nvme0n1", 00:14:13.362 "enable": true 00:14:13.362 } 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "method": "bdev_wait_for_examine" 00:14:13.362 } 00:14:13.362 ] 00:14:13.362 }, 00:14:13.362 { 00:14:13.362 "subsystem": "nbd", 00:14:13.362 "config": [] 00:14:13.362 } 00:14:13.362 ] 00:14:13.362 }' 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 73159 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73159 ']' 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73159 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73159 00:14:13.362 killing process with pid 73159 00:14:13.362 Received shutdown signal, test time was about 1.000000 seconds 00:14:13.362 00:14:13.362 Latency(us) 00:14:13.362 [2024-12-11T08:48:21.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.362 [2024-12-11T08:48:21.136Z] =================================================================================================================== 00:14:13.362 [2024-12-11T08:48:21.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73159' 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73159 00:14:13.362 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73159 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73127 ']' 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.622 killing process with pid 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73127' 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73127 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.622 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:13.622 "subsystems": [ 00:14:13.622 { 00:14:13.622 "subsystem": "keyring", 00:14:13.622 "config": [ 00:14:13.622 { 00:14:13.622 "method": "keyring_file_add_key", 00:14:13.622 "params": { 00:14:13.622 "name": "key0", 00:14:13.622 "path": "/tmp/tmp.qzb2ejOdP9" 00:14:13.622 } 00:14:13.622 } 00:14:13.622 ] 00:14:13.622 }, 00:14:13.622 { 00:14:13.622 "subsystem": "iobuf", 00:14:13.622 "config": [ 00:14:13.622 { 00:14:13.622 "method": "iobuf_set_options", 00:14:13.622 "params": { 00:14:13.622 "small_pool_count": 8192, 00:14:13.622 "large_pool_count": 1024, 00:14:13.622 "small_bufsize": 8192, 00:14:13.622 "large_bufsize": 135168, 00:14:13.622 "enable_numa": false 00:14:13.622 } 00:14:13.622 } 00:14:13.622 ] 00:14:13.622 }, 00:14:13.622 { 00:14:13.622 "subsystem": "sock", 00:14:13.622 "config": [ 00:14:13.622 { 00:14:13.622 "method": "sock_set_default_impl", 00:14:13.622 "params": { 00:14:13.622 "impl_name": "uring" 00:14:13.622 } 00:14:13.622 }, 00:14:13.622 { 00:14:13.622 "method": "sock_impl_set_options", 00:14:13.622 "params": { 00:14:13.622 "impl_name": "ssl", 00:14:13.622 "recv_buf_size": 4096, 00:14:13.622 "send_buf_size": 4096, 00:14:13.622 "enable_recv_pipe": true, 00:14:13.622 "enable_quickack": false, 00:14:13.622 "enable_placement_id": 0, 00:14:13.622 "enable_zerocopy_send_server": true, 00:14:13.622 "enable_zerocopy_send_client": false, 00:14:13.622 "zerocopy_threshold": 0, 00:14:13.622 "tls_version": 0, 00:14:13.622 "enable_ktls": false 00:14:13.622 } 00:14:13.622 }, 00:14:13.622 { 00:14:13.622 "method": "sock_impl_set_options", 00:14:13.622 "params": { 00:14:13.622 "impl_name": "posix", 00:14:13.622 "recv_buf_size": 2097152, 00:14:13.622 "send_buf_size": 2097152, 00:14:13.622 "enable_recv_pipe": true, 00:14:13.622 "enable_quickack": false, 00:14:13.622 "enable_placement_id": 0, 00:14:13.622 "enable_zerocopy_send_server": true, 00:14:13.622 "enable_zerocopy_send_client": false, 00:14:13.622 "zerocopy_threshold": 0, 00:14:13.622 "tls_version": 0, 00:14:13.622 "enable_ktls": false 00:14:13.622 } 00:14:13.622 }, 00:14:13.622 { 00:14:13.622 "method": "sock_impl_set_options", 00:14:13.622 "params": { 00:14:13.622 "impl_name": "uring", 00:14:13.622 "recv_buf_size": 2097152, 00:14:13.622 "send_buf_size": 2097152, 00:14:13.622 "enable_recv_pipe": true, 00:14:13.622 "enable_quickack": false, 00:14:13.622 "enable_placement_id": 0, 00:14:13.622 "enable_zerocopy_send_server": false, 00:14:13.622 "enable_zerocopy_send_client": false, 00:14:13.622 "zerocopy_threshold": 0, 00:14:13.623 "tls_version": 0, 00:14:13.623 "enable_ktls": false 00:14:13.623 } 00:14:13.623 } 00:14:13.623 ] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": "vmd", 
00:14:13.623 "config": [] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": "accel", 00:14:13.623 "config": [ 00:14:13.623 { 00:14:13.623 "method": "accel_set_options", 00:14:13.623 "params": { 00:14:13.623 "small_cache_size": 128, 00:14:13.623 "large_cache_size": 16, 00:14:13.623 "task_count": 2048, 00:14:13.623 "sequence_count": 2048, 00:14:13.623 "buf_count": 2048 00:14:13.623 } 00:14:13.623 } 00:14:13.623 ] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": "bdev", 00:14:13.623 "config": [ 00:14:13.623 { 00:14:13.623 "method": "bdev_set_options", 00:14:13.623 "params": { 00:14:13.623 "bdev_io_pool_size": 65535, 00:14:13.623 "bdev_io_cache_size": 256, 00:14:13.623 "bdev_auto_examine": true, 00:14:13.623 "iobuf_small_cache_size": 128, 00:14:13.623 "iobuf_large_cache_size": 16 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_raid_set_options", 00:14:13.623 "params": { 00:14:13.623 "process_window_size_kb": 1024, 00:14:13.623 "process_max_bandwidth_mb_sec": 0 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_iscsi_set_options", 00:14:13.623 "params": { 00:14:13.623 "timeout_sec": 30 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_nvme_set_options", 00:14:13.623 "params": { 00:14:13.623 "action_on_timeout": "none", 00:14:13.623 "timeout_us": 0, 00:14:13.623 "timeout_admin_us": 0, 00:14:13.623 "keep_alive_timeout_ms": 10000, 00:14:13.623 "arbitration_burst": 0, 00:14:13.623 "low_priority_weight": 0, 00:14:13.623 "medium_priority_weight": 0, 00:14:13.623 "high_priority_weight": 0, 00:14:13.623 "nvme_adminq_poll_period_us": 10000, 00:14:13.623 "nvme_ioq_poll_period_us": 0, 00:14:13.623 "io_queue_requests": 0, 00:14:13.623 "delay_cmd_submit": true, 00:14:13.623 "transport_retry_count": 4, 00:14:13.623 "bdev_retry_count": 3, 00:14:13.623 "transport_ack_timeout": 0, 00:14:13.623 "ctrlr_loss_timeout_sec": 0, 00:14:13.623 "reconnect_delay_sec": 0, 00:14:13.623 "fast_io_fail_timeout_sec": 0, 00:14:13.623 "disable_auto_failback": false, 00:14:13.623 "generate_uuids": false, 00:14:13.623 "transport_tos": 0, 00:14:13.623 "nvme_error_stat": false, 00:14:13.623 "rdma_srq_size": 0, 00:14:13.623 "io_path_stat": false, 00:14:13.623 "allow_accel_sequence": false, 00:14:13.623 "rdma_max_cq_size": 0, 00:14:13.623 "rdma_cm_event_timeout_ms": 0, 00:14:13.623 "dhchap_digests": [ 00:14:13.623 "sha256", 00:14:13.623 "sha384", 00:14:13.623 "sha512" 00:14:13.623 ], 00:14:13.623 "dhchap_dhgroups": [ 00:14:13.623 "null", 00:14:13.623 "ffdhe2048", 00:14:13.623 "ffdhe3072", 00:14:13.623 "ffdhe4096", 00:14:13.623 "ffdhe6144", 00:14:13.623 "ffdhe8192" 00:14:13.623 ], 00:14:13.623 "rdma_umr_per_io": false 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_nvme_set_hotplug", 00:14:13.623 "params": { 00:14:13.623 "period_us": 100000, 00:14:13.623 "enable": false 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_malloc_create", 00:14:13.623 "params": { 00:14:13.623 "name": "malloc0", 00:14:13.623 "num_blocks": 8192, 00:14:13.623 "block_size": 4096, 00:14:13.623 "physical_block_size": 4096, 00:14:13.623 "uuid": "29ccc90a-2410-41bb-8489-1b4ece060ab4", 00:14:13.623 "optimal_io_boundary": 0, 00:14:13.623 "md_size": 0, 00:14:13.623 "dif_type": 0, 00:14:13.623 "dif_is_head_of_md": false, 00:14:13.623 "dif_pi_format": 0 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "bdev_wait_for_examine" 00:14:13.623 } 00:14:13.623 ] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": 
"nbd", 00:14:13.623 "config": [] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": "scheduler", 00:14:13.623 "config": [ 00:14:13.623 { 00:14:13.623 "method": "framework_set_scheduler", 00:14:13.623 "params": { 00:14:13.623 "name": "static" 00:14:13.623 } 00:14:13.623 } 00:14:13.623 ] 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "subsystem": "nvmf", 00:14:13.623 "config": [ 00:14:13.623 { 00:14:13.623 "method": "nvmf_set_config", 00:14:13.623 "params": { 00:14:13.623 "discovery_filter": "match_any", 00:14:13.623 "admin_cmd_passthru": { 00:14:13.623 "identify_ctrlr": false 00:14:13.623 }, 00:14:13.623 "dhchap_digests": [ 00:14:13.623 "sha256", 00:14:13.623 "sha384", 00:14:13.623 "sha512" 00:14:13.623 ], 00:14:13.623 "dhchap_dhgroups": [ 00:14:13.623 "null", 00:14:13.623 "ffdhe2048", 00:14:13.623 "ffdhe3072", 00:14:13.623 "ffdhe4096", 00:14:13.623 "ffdhe6144", 00:14:13.623 "ffdhe8192" 00:14:13.623 ] 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_set_max_subsystems", 00:14:13.623 "params": { 00:14:13.623 "max_subsystems": 1024 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_set_crdt", 00:14:13.623 "params": { 00:14:13.623 "crdt1": 0, 00:14:13.623 "crdt2": 0, 00:14:13.623 "crdt3": 0 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_create_transport", 00:14:13.623 "params": { 00:14:13.623 "trtype": "TCP", 00:14:13.623 "max_queue_depth": 128, 00:14:13.623 "max_io_qpairs_per_ctrlr": 127, 00:14:13.623 "in_capsule_data_size": 4096, 00:14:13.623 "max_io_size": 131072, 00:14:13.623 "io_unit_size": 131072, 00:14:13.623 "max_aq_depth": 128, 00:14:13.623 "num_shared_buffers": 511, 00:14:13.623 "buf_cache_size": 4294967295, 00:14:13.623 "dif_insert_or_strip": false, 00:14:13.623 "zcopy": false, 00:14:13.623 "c2h_success": false, 00:14:13.623 "sock_priority": 0, 00:14:13.623 "abort_timeout_sec": 1, 00:14:13.623 "ack_timeout": 0, 00:14:13.623 "data_wr_pool_size": 0 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_create_subsystem", 00:14:13.623 "params": { 00:14:13.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.623 "allow_any_host": false, 00:14:13.623 "serial_number": "00000000000000000000", 00:14:13.623 "model_number": "SPDK bdev Controller", 00:14:13.623 "max_namespaces": 32, 00:14:13.623 "min_cntlid": 1, 00:14:13.623 "max_cntlid": 65519, 00:14:13.623 "ana_reporting": false 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_subsystem_add_host", 00:14:13.623 "params": { 00:14:13.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.623 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.623 "psk": "key0" 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_subsystem_add_ns", 00:14:13.623 "params": { 00:14:13.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.623 "namespace": { 00:14:13.623 "nsid": 1, 00:14:13.623 "bdev_name": "malloc0", 00:14:13.623 "nguid": "29CCC90A241041BB84891B4ECE060AB4", 00:14:13.623 "uuid": "29ccc90a-2410-41bb-8489-1b4ece060ab4", 00:14:13.623 "no_auto_visible": false 00:14:13.623 } 00:14:13.623 } 00:14:13.623 }, 00:14:13.623 { 00:14:13.623 "method": "nvmf_subsystem_add_listener", 00:14:13.623 "params": { 00:14:13.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.623 "listen_address": { 00:14:13.623 "trtype": "TCP", 00:14:13.623 "adrfam": "IPv4", 00:14:13.623 "traddr": "10.0.0.3", 00:14:13.623 "trsvcid": "4420" 00:14:13.623 }, 00:14:13.623 "secure_channel": false, 00:14:13.623 "sock_impl": "ssl" 00:14:13.623 } 00:14:13.623 } 
00:14:13.623 ] 00:14:13.623 } 00:14:13.623 ] 00:14:13.623 }' 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73212 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73212 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73212 ']' 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.623 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.883 [2024-12-11 08:48:21.411722] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:13.883 [2024-12-11 08:48:21.412268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.883 [2024-12-11 08:48:21.556369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.883 [2024-12-11 08:48:21.586292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.883 [2024-12-11 08:48:21.586357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.883 [2024-12-11 08:48:21.586384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.883 [2024-12-11 08:48:21.586392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.883 [2024-12-11 08:48:21.586399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:13.883 [2024-12-11 08:48:21.586789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.141 [2024-12-11 08:48:21.729101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.141 [2024-12-11 08:48:21.789414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.141 [2024-12-11 08:48:21.821379] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:14.141 [2024-12-11 08:48:21.821603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=73244 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 73244 /var/tmp/bdevperf.sock 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73244 ']' 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.709 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:14.709 "subsystems": [ 00:14:14.709 { 00:14:14.709 "subsystem": "keyring", 00:14:14.709 "config": [ 00:14:14.709 { 00:14:14.709 "method": "keyring_file_add_key", 00:14:14.709 "params": { 00:14:14.709 "name": "key0", 00:14:14.709 "path": "/tmp/tmp.qzb2ejOdP9" 00:14:14.709 } 00:14:14.709 } 00:14:14.709 ] 00:14:14.709 }, 00:14:14.709 { 00:14:14.709 "subsystem": "iobuf", 00:14:14.709 "config": [ 00:14:14.709 { 00:14:14.709 "method": "iobuf_set_options", 00:14:14.709 "params": { 00:14:14.709 "small_pool_count": 8192, 00:14:14.709 "large_pool_count": 1024, 00:14:14.709 "small_bufsize": 8192, 00:14:14.709 "large_bufsize": 135168, 00:14:14.709 "enable_numa": false 00:14:14.709 } 00:14:14.709 } 00:14:14.709 ] 00:14:14.709 }, 00:14:14.709 { 00:14:14.709 "subsystem": "sock", 00:14:14.709 "config": [ 00:14:14.709 { 00:14:14.709 "method": "sock_set_default_impl", 00:14:14.709 "params": { 00:14:14.709 "impl_name": "uring" 00:14:14.709 } 00:14:14.709 }, 00:14:14.709 { 00:14:14.709 "method": "sock_impl_set_options", 00:14:14.709 "params": { 00:14:14.709 "impl_name": "ssl", 00:14:14.709 "recv_buf_size": 4096, 00:14:14.709 "send_buf_size": 4096, 00:14:14.709 "enable_recv_pipe": true, 00:14:14.709 "enable_quickack": false, 00:14:14.709 "enable_placement_id": 0, 00:14:14.709 "enable_zerocopy_send_server": true, 00:14:14.709 
"enable_zerocopy_send_client": false, 00:14:14.709 "zerocopy_threshold": 0, 00:14:14.709 "tls_version": 0, 00:14:14.709 "enable_ktls": false 00:14:14.709 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "sock_impl_set_options", 00:14:14.710 "params": { 00:14:14.710 "impl_name": "posix", 00:14:14.710 "recv_buf_size": 2097152, 00:14:14.710 "send_buf_size": 2097152, 00:14:14.710 "enable_recv_pipe": true, 00:14:14.710 "enable_quickack": false, 00:14:14.710 "enable_placement_id": 0, 00:14:14.710 "enable_zerocopy_send_server": true, 00:14:14.710 "enable_zerocopy_send_client": false, 00:14:14.710 "zerocopy_threshold": 0, 00:14:14.710 "tls_version": 0, 00:14:14.710 "enable_ktls": false 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "sock_impl_set_options", 00:14:14.710 "params": { 00:14:14.710 "impl_name": "uring", 00:14:14.710 "recv_buf_size": 2097152, 00:14:14.710 "send_buf_size": 2097152, 00:14:14.710 "enable_recv_pipe": true, 00:14:14.710 "enable_quickack": false, 00:14:14.710 "enable_placement_id": 0, 00:14:14.710 "enable_zerocopy_send_server": false, 00:14:14.710 "enable_zerocopy_send_client": false, 00:14:14.710 "zerocopy_threshold": 0, 00:14:14.710 "tls_version": 0, 00:14:14.710 "enable_ktls": false 00:14:14.710 } 00:14:14.710 } 00:14:14.710 ] 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "subsystem": "vmd", 00:14:14.710 "config": [] 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "subsystem": "accel", 00:14:14.710 "config": [ 00:14:14.710 { 00:14:14.710 "method": "accel_set_options", 00:14:14.710 "params": { 00:14:14.710 "small_cache_size": 128, 00:14:14.710 "large_cache_size": 16, 00:14:14.710 "task_count": 2048, 00:14:14.710 "sequence_count": 2048, 00:14:14.710 "buf_count": 2048 00:14:14.710 } 00:14:14.710 } 00:14:14.710 ] 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "subsystem": "bdev", 00:14:14.710 "config": [ 00:14:14.710 { 00:14:14.710 "method": "bdev_set_options", 00:14:14.710 "params": { 00:14:14.710 "bdev_io_pool_size": 65535, 00:14:14.710 "bdev_io_cache_size": 256, 00:14:14.710 "bdev_auto_examine": true, 00:14:14.710 "iobuf_small_cache_size": 128, 00:14:14.710 "iobuf_large_cache_size": 16 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_raid_set_options", 00:14:14.710 "params": { 00:14:14.710 "process_window_size_kb": 1024, 00:14:14.710 "process_max_bandwidth_mb_sec": 0 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_iscsi_set_options", 00:14:14.710 "params": { 00:14:14.710 "timeout_sec": 30 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_nvme_set_options", 00:14:14.710 "params": { 00:14:14.710 "action_on_timeout": "none", 00:14:14.710 "timeout_us": 0, 00:14:14.710 "timeout_admin_us": 0, 00:14:14.710 "keep_alive_timeout_ms": 10000, 00:14:14.710 "arbitration_burst": 0, 00:14:14.710 "low_priority_weight": 0, 00:14:14.710 "medium_priority_weight": 0, 00:14:14.710 "high_priority_weight": 0, 00:14:14.710 "nvme_adminq_poll_period_us": 10000, 00:14:14.710 "nvme_ioq_poll_period_us": 0, 00:14:14.710 "io_queue_requests": 512, 00:14:14.710 "delay_cmd_submit": true, 00:14:14.710 "transport_retry_count": 4, 00:14:14.710 "bdev_retry_count": 3, 00:14:14.710 "transport_ack_timeout": 0, 00:14:14.710 "ctrlr_loss_timeout_sec": 0, 00:14:14.710 "reconnect_delay_sec": 0, 00:14:14.710 "fast_io_fail_timeout_sec": 0, 00:14:14.710 "disable_auto_failback": false, 00:14:14.710 "generate_uuids": false, 00:14:14.710 "transport_tos": 0, 00:14:14.710 "nvme_error_stat": false, 00:14:14.710 "rdma_srq_size": 0, 
00:14:14.710 "io_path_stat": false, 00:14:14.710 "allow_accel_sequence": false, 00:14:14.710 "rdma_max_cq_size": 0, 00:14:14.710 "rdma_cm_event_timeout_ms": 0, 00:14:14.710 "dhchap_digests": [ 00:14:14.710 "sha256", 00:14:14.710 "sha384", 00:14:14.710 "sha512" 00:14:14.710 ], 00:14:14.710 "dhchap_dhgroups": [ 00:14:14.710 "null", 00:14:14.710 "ffdhe2048", 00:14:14.710 "ffdhe3072", 00:14:14.710 "ffdhe4096", 00:14:14.710 "ffdhe6144", 00:14:14.710 "ffdhe8192" 00:14:14.710 ], 00:14:14.710 "rdma_umr_per_io": false 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_nvme_attach_controller", 00:14:14.710 "params": { 00:14:14.710 "name": "nvme0", 00:14:14.710 "trtype": "TCP", 00:14:14.710 "adrfam": "IPv4", 00:14:14.710 "traddr": "10.0.0.3", 00:14:14.710 "trsvcid": "4420", 00:14:14.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.710 "prchk_reftag": false, 00:14:14.710 "prchk_guard": false, 00:14:14.710 "ctrlr_loss_timeout_sec": 0, 00:14:14.710 "reconnect_delay_sec": 0, 00:14:14.710 "fast_io_fail_timeout_sec": 0, 00:14:14.710 "psk": "key0", 00:14:14.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.710 "hdgst": false, 00:14:14.710 "ddgst": false, 00:14:14.710 "multipath": "multipath" 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_nvme_set_hotplug", 00:14:14.710 "params": { 00:14:14.710 "period_us": 100000, 00:14:14.710 "enable": false 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_enable_histogram", 00:14:14.710 "params": { 00:14:14.710 "name": "nvme0n1", 00:14:14.710 "enable": true 00:14:14.710 } 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "method": "bdev_wait_for_examine" 00:14:14.710 } 00:14:14.710 ] 00:14:14.710 }, 00:14:14.710 { 00:14:14.710 "subsystem": "nbd", 00:14:14.710 "config": [] 00:14:14.710 } 00:14:14.710 ] 00:14:14.710 }' 00:14:14.710 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.710 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.710 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.969 [2024-12-11 08:48:22.550922] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:14:14.969 [2024-12-11 08:48:22.551040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73244 ] 00:14:14.969 [2024-12-11 08:48:22.703678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.229 [2024-12-11 08:48:22.742555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.229 [2024-12-11 08:48:22.852755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.229 [2024-12-11 08:48:22.883802] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.795 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.795 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:15.795 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:15.795 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:16.055 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.055 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.313 Running I/O for 1 seconds... 00:14:17.249 3968.00 IOPS, 15.50 MiB/s 00:14:17.249 Latency(us) 00:14:17.249 [2024-12-11T08:48:25.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.249 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:17.249 Verification LBA range: start 0x0 length 0x2000 00:14:17.249 nvme0n1 : 1.03 3978.68 15.54 0.00 0.00 31841.14 6464.23 19065.02 00:14:17.249 [2024-12-11T08:48:25.023Z] =================================================================================================================== 00:14:17.249 [2024-12-11T08:48:25.023Z] Total : 3978.68 15.54 0.00 0.00 31841.14 6464.23 19065.02 00:14:17.249 { 00:14:17.249 "results": [ 00:14:17.249 { 00:14:17.249 "job": "nvme0n1", 00:14:17.249 "core_mask": "0x2", 00:14:17.249 "workload": "verify", 00:14:17.249 "status": "finished", 00:14:17.249 "verify_range": { 00:14:17.249 "start": 0, 00:14:17.249 "length": 8192 00:14:17.249 }, 00:14:17.249 "queue_depth": 128, 00:14:17.249 "io_size": 4096, 00:14:17.249 "runtime": 1.029487, 00:14:17.249 "iops": 3978.680643854658, 00:14:17.249 "mibps": 15.541721265057257, 00:14:17.249 "io_failed": 0, 00:14:17.249 "io_timeout": 0, 00:14:17.249 "avg_latency_us": 31841.138181818184, 00:14:17.249 "min_latency_us": 6464.232727272727, 00:14:17.249 "max_latency_us": 19065.01818181818 00:14:17.249 } 00:14:17.249 ], 00:14:17.249 "core_count": 1 00:14:17.249 } 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:17.249 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:17.249 nvmf_trace.0 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73244 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73244 ']' 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73244 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73244 00:14:17.507 killing process with pid 73244 00:14:17.507 Received shutdown signal, test time was about 1.000000 seconds 00:14:17.507 00:14:17.507 Latency(us) 00:14:17.507 [2024-12-11T08:48:25.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.507 [2024-12-11T08:48:25.281Z] =================================================================================================================== 00:14:17.507 [2024-12-11T08:48:25.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73244' 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73244 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73244 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.507 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.507 rmmod nvme_tcp 00:14:17.507 rmmod nvme_fabrics 00:14:17.765 rmmod nvme_keyring 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 73212 ']' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 73212 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73212 ']' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73212 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73212 00:14:17.765 killing process with pid 73212 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73212' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73212 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73212 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:17.765 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:18.023 08:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:18.023 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.o9sjXPZUL1 /tmp/tmp.cFF6l94Vh0 /tmp/tmp.qzb2ejOdP9 00:14:18.023 ************************************ 00:14:18.023 END TEST nvmf_tls 00:14:18.023 00:14:18.023 real 1m19.604s 00:14:18.023 user 2m9.302s 00:14:18.023 sys 0m25.925s 00:14:18.024 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.024 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.024 ************************************ 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.283 ************************************ 00:14:18.283 START TEST nvmf_fips 00:14:18.283 ************************************ 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:18.283 * Looking for test storage... 
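run_test has just timed nvmf_tls at 1m19.604s wall-clock and is launching the FIPS suite the same way. Outside the harness, the same script can in principle be invoked on its own, a sketch assuming the repository path shown above and a root shell (the script creates network namespaces and iptables rules):

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/fips/fips.sh --transport=tcp

Run this way it skips run_test's START/END markers and timing summary but performs the same steps traced below.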
00:14:18.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.283 --rc genhtml_branch_coverage=1 00:14:18.283 --rc genhtml_function_coverage=1 00:14:18.283 --rc genhtml_legend=1 00:14:18.283 --rc geninfo_all_blocks=1 00:14:18.283 --rc geninfo_unexecuted_blocks=1 00:14:18.283 00:14:18.283 ' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.283 --rc genhtml_branch_coverage=1 00:14:18.283 --rc genhtml_function_coverage=1 00:14:18.283 --rc genhtml_legend=1 00:14:18.283 --rc geninfo_all_blocks=1 00:14:18.283 --rc geninfo_unexecuted_blocks=1 00:14:18.283 00:14:18.283 ' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.283 --rc genhtml_branch_coverage=1 00:14:18.283 --rc genhtml_function_coverage=1 00:14:18.283 --rc genhtml_legend=1 00:14:18.283 --rc geninfo_all_blocks=1 00:14:18.283 --rc geninfo_unexecuted_blocks=1 00:14:18.283 00:14:18.283 ' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.283 --rc genhtml_branch_coverage=1 00:14:18.283 --rc genhtml_function_coverage=1 00:14:18.283 --rc genhtml_legend=1 00:14:18.283 --rc geninfo_all_blocks=1 00:14:18.283 --rc geninfo_unexecuted_blocks=1 00:14:18.283 00:14:18.283 ' 00:14:18.283 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
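The long cmp_versions walk above is only deciding that the installed lcov (reported as 1.15 by `lcov --version | awk '{print $NF}'`) is older than 2, so the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported. A rough stand-alone equivalent of that decision, shown purely as an illustration and not the helper scripts/common.sh actually uses, could look like:

  installed=$(lcov --version | awk '{print $NF}')
  if [ "$(printf '%s\n' "$installed" 2 | sort -V | head -n1)" != 2 ]; then
      # lcov < 2: keep the old-style coverage rc options
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi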
00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.284 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:18.284 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:18.544 Error setting digest 00:14:18.544 40D2ED7B417F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:18.544 40D2ED7B417F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:18.544 
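At this point fips.sh has validated the crypto environment: OpenSSL 3.1.1 satisfies the 3.0.0 minimum, /usr/lib64/ossl-modules/fips.so exists, `openssl list -providers` shows both the base and the FIPS provider, and the deliberate `openssl md5` attempt fails with "unsupported", which is exactly what a FIPS-enforcing build should do. The same spot checks can be repeated by hand along these lines (a hedged sketch reusing the commands and paths printed above):

  openssl version                                   # expect a 3.x version string
  ls "$(openssl info -modulesdir)"/fips.so          # FIPS provider module present
  openssl list -providers | grep name:              # base + fips providers listed
  openssl md5 /dev/null || echo 'MD5 rejected - FIPS mode is enforced'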
08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.544 Cannot find device "nvmf_init_br" 00:14:18.544 08:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.544 Cannot find device "nvmf_init_br2" 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.544 Cannot find device "nvmf_tgt_br" 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:18.544 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.544 Cannot find device "nvmf_tgt_br2" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.545 Cannot find device "nvmf_init_br" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.545 Cannot find device "nvmf_init_br2" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.545 Cannot find device "nvmf_tgt_br" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.545 Cannot find device "nvmf_tgt_br2" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.545 Cannot find device "nvmf_br" 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:18.545 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.804 Cannot find device "nvmf_init_if" 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.804 Cannot find device "nvmf_init_if2" 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.804 08:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:18.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:18.804 00:14:18.804 --- 10.0.0.3 ping statistics --- 00:14:18.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.804 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:18.804 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:18.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:18.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:18.804 00:14:18.804 --- 10.0.0.4 ping statistics --- 00:14:18.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.805 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:18.805 00:14:18.805 --- 10.0.0.1 ping statistics --- 00:14:18.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.805 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:18.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:18.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:14:18.805 00:14:18.805 --- 10.0.0.2 ping statistics --- 00:14:18.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.805 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.805 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73556 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73556 00:14:19.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73556 ']' 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.063 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.063 [2024-12-11 08:48:26.677290] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
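The four successful pings above confirm the virtual topology that nvmf_veth_init just built: initiator addresses 10.0.0.1/24 and 10.0.0.2/24 sit on veth devices in the default namespace, target addresses 10.0.0.3/24 and 10.0.0.4/24 sit inside the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge joins the two sides. Stripped of the xtrace noise, the shape of that setup is roughly as follows (commands taken from the trace above; only the first veth pair is shown and the link-up steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br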
00:14:19.063 [2024-12-11 08:48:26.677625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.063 [2024-12-11 08:48:26.827631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.322 [2024-12-11 08:48:26.867566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.322 [2024-12-11 08:48:26.867625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.322 [2024-12-11 08:48:26.867638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.322 [2024-12-11 08:48:26.867648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.322 [2024-12-11 08:48:26.867657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.322 [2024-12-11 08:48:26.867979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.322 [2024-12-11 08:48:26.902230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.V4V 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.322 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.V4V 00:14:19.322 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.V4V 00:14:19.322 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.V4V 00:14:19.322 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.580 [2024-12-11 08:48:27.288326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.580 [2024-12-11 08:48:27.304260] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.580 [2024-12-11 08:48:27.304450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.580 malloc0 00:14:19.839 08:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73590 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73590 /var/tmp/bdevperf.sock 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73590 ']' 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.839 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.839 [2024-12-11 08:48:27.450890] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:19.839 [2024-12-11 08:48:27.450990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73590 ] 00:14:19.839 [2024-12-11 08:48:27.601459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.098 [2024-12-11 08:48:27.640202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.098 [2024-12-11 08:48:27.673490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.098 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.098 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:20.098 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.V4V 00:14:20.357 08:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.615 [2024-12-11 08:48:28.244432] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.615 TLSTESTn1 00:14:20.615 08:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.874 Running I/O for 10 seconds... 
00:14:22.769 4096.00 IOPS, 16.00 MiB/s [2024-12-11T08:48:31.480Z] 4144.50 IOPS, 16.19 MiB/s [2024-12-11T08:48:32.858Z] 4196.00 IOPS, 16.39 MiB/s [2024-12-11T08:48:33.794Z] 4199.00 IOPS, 16.40 MiB/s [2024-12-11T08:48:34.730Z] 4222.20 IOPS, 16.49 MiB/s [2024-12-11T08:48:35.666Z] 4251.50 IOPS, 16.61 MiB/s [2024-12-11T08:48:36.603Z] 4276.14 IOPS, 16.70 MiB/s [2024-12-11T08:48:37.541Z] 4283.38 IOPS, 16.73 MiB/s [2024-12-11T08:48:38.477Z] 4292.56 IOPS, 16.77 MiB/s [2024-12-11T08:48:38.736Z] 4301.50 IOPS, 16.80 MiB/s 00:14:30.962 Latency(us) 00:14:30.962 [2024-12-11T08:48:38.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:30.962 Verification LBA range: start 0x0 length 0x2000 00:14:30.962 TLSTESTn1 : 10.02 4306.89 16.82 0.00 0.00 29664.59 6166.34 26095.24 00:14:30.962 [2024-12-11T08:48:38.736Z] =================================================================================================================== 00:14:30.962 [2024-12-11T08:48:38.736Z] Total : 4306.89 16.82 0.00 0.00 29664.59 6166.34 26095.24 00:14:30.962 { 00:14:30.962 "results": [ 00:14:30.962 { 00:14:30.962 "job": "TLSTESTn1", 00:14:30.962 "core_mask": "0x4", 00:14:30.962 "workload": "verify", 00:14:30.962 "status": "finished", 00:14:30.962 "verify_range": { 00:14:30.962 "start": 0, 00:14:30.962 "length": 8192 00:14:30.962 }, 00:14:30.962 "queue_depth": 128, 00:14:30.962 "io_size": 4096, 00:14:30.962 "runtime": 10.016732, 00:14:30.962 "iops": 4306.8937054520375, 00:14:30.962 "mibps": 16.82380353692202, 00:14:30.962 "io_failed": 0, 00:14:30.962 "io_timeout": 0, 00:14:30.962 "avg_latency_us": 29664.593141390495, 00:14:30.962 "min_latency_us": 6166.341818181818, 00:14:30.962 "max_latency_us": 26095.243636363637 00:14:30.962 } 00:14:30.962 ], 00:14:30.962 "core_count": 1 00:14:30.962 } 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.962 nvmf_trace.0 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73590 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73590 ']' 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73590 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73590 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73590' 00:14:30.962 killing process with pid 73590 00:14:30.962 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73590 00:14:30.962 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.962 00:14:30.962 Latency(us) 00:14:30.962 [2024-12-11T08:48:38.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.962 [2024-12-11T08:48:38.736Z] =================================================================================================================== 00:14:30.962 [2024-12-11T08:48:38.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.963 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73590 00:14:31.221 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:31.221 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.221 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:31.221 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.221 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.222 rmmod nvme_tcp 00:14:31.222 rmmod nvme_fabrics 00:14:31.222 rmmod nvme_keyring 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73556 ']' 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73556 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73556 ']' 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73556 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73556 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:31.222 killing process with pid 73556 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73556' 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73556 00:14:31.222 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73556 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.481 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:31.741 08:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.V4V 00:14:31.741 ************************************ 00:14:31.741 END TEST nvmf_fips 00:14:31.741 ************************************ 00:14:31.741 00:14:31.741 real 0m13.459s 00:14:31.741 user 0m18.394s 00:14:31.741 sys 0m5.662s 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.741 ************************************ 00:14:31.741 START TEST nvmf_control_msg_list 00:14:31.741 ************************************ 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:31.741 * Looking for test storage... 00:14:31.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.741 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.742 --rc genhtml_branch_coverage=1 00:14:31.742 --rc genhtml_function_coverage=1 00:14:31.742 --rc genhtml_legend=1 00:14:31.742 --rc geninfo_all_blocks=1 00:14:31.742 --rc geninfo_unexecuted_blocks=1 00:14:31.742 00:14:31.742 ' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.742 --rc genhtml_branch_coverage=1 00:14:31.742 --rc genhtml_function_coverage=1 00:14:31.742 --rc genhtml_legend=1 00:14:31.742 --rc geninfo_all_blocks=1 00:14:31.742 --rc geninfo_unexecuted_blocks=1 00:14:31.742 00:14:31.742 ' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.742 --rc genhtml_branch_coverage=1 00:14:31.742 --rc genhtml_function_coverage=1 00:14:31.742 --rc genhtml_legend=1 00:14:31.742 --rc geninfo_all_blocks=1 00:14:31.742 --rc geninfo_unexecuted_blocks=1 00:14:31.742 00:14:31.742 ' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.742 --rc genhtml_branch_coverage=1 00:14:31.742 --rc genhtml_function_coverage=1 00:14:31.742 --rc genhtml_legend=1 00:14:31.742 --rc geninfo_all_blocks=1 00:14:31.742 --rc 
geninfo_unexecuted_blocks=1 00:14:31.742 00:14:31.742 ' 00:14:31.742 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:32.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:32.002 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:32.003 Cannot find device "nvmf_init_br" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:32.003 Cannot find device "nvmf_init_br2" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:32.003 Cannot find device "nvmf_tgt_br" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.003 Cannot find device "nvmf_tgt_br2" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:32.003 Cannot find device "nvmf_init_br" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:32.003 Cannot find device "nvmf_init_br2" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:32.003 Cannot find device "nvmf_tgt_br" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:32.003 Cannot find device "nvmf_tgt_br2" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:32.003 Cannot find device "nvmf_br" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:32.003 Cannot find 
device "nvmf_init_if" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:32.003 Cannot find device "nvmf_init_if2" 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.003 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:32.262 08:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.262 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:32.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:14:32.263 00:14:32.263 --- 10.0.0.3 ping statistics --- 00:14:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.263 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:32.263 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:32.263 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:14:32.263 00:14:32.263 --- 10.0.0.4 ping statistics --- 00:14:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.263 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:32.263 00:14:32.263 --- 10.0.0.1 ping statistics --- 00:14:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.263 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:32.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:14:32.263 00:14:32.263 --- 10.0.0.2 ping statistics --- 00:14:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.263 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73967 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73967 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73967 ']' 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
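The nvmf_control_msg_list run that this trace records, from the nvmf_veth_init topology above through the target start and the spdk_nvme_perf clients below, condenses to roughly the sketch that follows. It is reconstructed from the commands visible in this log rather than taken from the harness scripts themselves; using scripts/rpc.py in place of the rpc_cmd wrapper, a fixed sleep instead of waitforlisten, and dropping the second veth pair (nvmf_init_if2/nvmf_tgt_if2) are simplifying assumptions.

  #!/usr/bin/env bash
  # Condensed sketch of the nvmf_control_msg_list run recorded in this log.
  # Reconstructed from the traced commands; not the harness's own scripts.
  # Assumption: scripts/rpc.py stands in for the rpc_cmd wrapper used in the trace.
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk
  NS=nvmf_tgt_ns_spdk

  # 1. veth/bridge topology built by nvmf_veth_init: the initiator side stays in
  #    the default namespace, the target side moves into $NS. The second
  #    initiator/target pair from the log is omitted here for brevity.
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp

  # 2. Start the target inside the namespace and apply the RPCs traced below
  #    (transport flags copied verbatim from the trace: in-capsule data capped
  #    at 768 bytes, a single control message buffer).
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  sleep 2   # the harness uses waitforlisten on /var/tmp/spdk.sock instead of a fixed delay
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 32 512
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # 3. Three concurrent queue-depth-1, 4 KiB randread clients on cores 1-3,
  #    all going through the one control message buffer configured above.
  for mask in 0x2 0x4 0x8; do
      "$SPDK/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  done
  wait

With --control-msg-num 1 the three single-queue-depth clients presumably contend for the one control message buffer, which appears to be what this test exercises; the per-core latency tables further down show all three finishing at roughly 3.4K IOPS each.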
00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.263 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.522 [2024-12-11 08:48:40.050108] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:32.522 [2024-12-11 08:48:40.050223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.522 [2024-12-11 08:48:40.190862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.522 [2024-12-11 08:48:40.219618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.522 [2024-12-11 08:48:40.219910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.522 [2024-12-11 08:48:40.219946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.522 [2024-12-11 08:48:40.219954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.522 [2024-12-11 08:48:40.219961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.522 [2024-12-11 08:48:40.220284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.522 [2024-12-11 08:48:40.247932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 [2024-12-11 08:48:40.342703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 Malloc0 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.781 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.782 [2024-12-11 08:48:40.381431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73996 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73997 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73998 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.782 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73996 00:14:33.040 [2024-12-11 08:48:40.575737] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.040 [2024-12-11 08:48:40.585801] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.040 [2024-12-11 08:48:40.596002] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.978 Initializing NVMe Controllers 00:14:33.978 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:33.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:33.978 Initialization complete. Launching workers. 00:14:33.978 ======================================================== 00:14:33.978 Latency(us) 00:14:33.978 Device Information : IOPS MiB/s Average min max 00:14:33.978 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3401.00 13.29 293.68 127.12 708.30 00:14:33.978 ======================================================== 00:14:33.978 Total : 3401.00 13.29 293.68 127.12 708.30 00:14:33.978 00:14:33.978 Initializing NVMe Controllers 00:14:33.978 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:33.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:33.978 Initialization complete. Launching workers. 00:14:33.978 ======================================================== 00:14:33.978 Latency(us) 00:14:33.978 Device Information : IOPS MiB/s Average min max 00:14:33.978 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3390.98 13.25 294.56 157.10 562.17 00:14:33.978 ======================================================== 00:14:33.978 Total : 3390.98 13.25 294.56 157.10 562.17 00:14:33.978 00:14:33.978 Initializing NVMe Controllers 00:14:33.978 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:33.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:33.978 Initialization complete. Launching workers. 
00:14:33.978 ======================================================== 00:14:33.978 Latency(us) 00:14:33.978 Device Information : IOPS MiB/s Average min max 00:14:33.978 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3418.00 13.35 292.15 113.92 561.07 00:14:33.978 ======================================================== 00:14:33.978 Total : 3418.00 13.35 292.15 113.92 561.07 00:14:33.978 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73997 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73998 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.978 rmmod nvme_tcp 00:14:33.978 rmmod nvme_fabrics 00:14:33.978 rmmod nvme_keyring 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:33.978 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73967 ']' 00:14:33.979 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73967 00:14:33.979 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73967 ']' 00:14:33.979 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73967 00:14:33.979 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73967 00:14:34.238 killing process with pid 73967 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73967' 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73967 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73967 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:34.238 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:34.238 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:34.238 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:34.498 00:14:34.498 real 0m2.860s 00:14:34.498 user 0m4.749s 00:14:34.498 
sys 0m1.262s 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.498 ************************************ 00:14:34.498 END TEST nvmf_control_msg_list 00:14:34.498 ************************************ 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.498 ************************************ 00:14:34.498 START TEST nvmf_wait_for_buf 00:14:34.498 ************************************ 00:14:34.498 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:34.758 * Looking for test storage... 00:14:34.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:34.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.758 --rc genhtml_branch_coverage=1 00:14:34.758 --rc genhtml_function_coverage=1 00:14:34.758 --rc genhtml_legend=1 00:14:34.758 --rc geninfo_all_blocks=1 00:14:34.758 --rc geninfo_unexecuted_blocks=1 00:14:34.758 00:14:34.758 ' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:34.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.758 --rc genhtml_branch_coverage=1 00:14:34.758 --rc genhtml_function_coverage=1 00:14:34.758 --rc genhtml_legend=1 00:14:34.758 --rc geninfo_all_blocks=1 00:14:34.758 --rc geninfo_unexecuted_blocks=1 00:14:34.758 00:14:34.758 ' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:34.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.758 --rc genhtml_branch_coverage=1 00:14:34.758 --rc genhtml_function_coverage=1 00:14:34.758 --rc genhtml_legend=1 00:14:34.758 --rc geninfo_all_blocks=1 00:14:34.758 --rc geninfo_unexecuted_blocks=1 00:14:34.758 00:14:34.758 ' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:34.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.758 --rc genhtml_branch_coverage=1 00:14:34.758 --rc genhtml_function_coverage=1 00:14:34.758 --rc genhtml_legend=1 00:14:34.758 --rc geninfo_all_blocks=1 00:14:34.758 --rc geninfo_unexecuted_blocks=1 00:14:34.758 00:14:34.758 ' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.758 08:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.758 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:34.759 Cannot find device "nvmf_init_br" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:34.759 Cannot find device "nvmf_init_br2" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:34.759 Cannot find device "nvmf_tgt_br" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.759 Cannot find device "nvmf_tgt_br2" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:34.759 Cannot find device "nvmf_init_br" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:34.759 Cannot find device "nvmf_init_br2" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:34.759 Cannot find device "nvmf_tgt_br" 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:34.759 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:35.018 Cannot find device "nvmf_tgt_br2" 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:35.018 Cannot find device "nvmf_br" 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:35.018 Cannot find device "nvmf_init_if" 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:35.018 Cannot find device "nvmf_init_if2" 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.018 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.018 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:35.019 00:14:35.019 --- 10.0.0.3 ping statistics --- 00:14:35.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.019 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.019 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.019 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:14:35.019 00:14:35.019 --- 10.0.0.4 ping statistics --- 00:14:35.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.019 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:35.019 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:35.278 00:14:35.278 --- 10.0.0.1 ping statistics --- 00:14:35.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.278 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:35.278 00:14:35.278 --- 10.0.0.2 ping statistics --- 00:14:35.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.278 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=74227 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 74227 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 74227 ']' 00:14:35.278 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.279 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.279 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.279 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.279 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.279 [2024-12-11 08:48:42.877116] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:14:35.279 [2024-12-11 08:48:42.877402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.279 [2024-12-11 08:48:43.015978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.279 [2024-12-11 08:48:43.046585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.279 [2024-12-11 08:48:43.046870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.279 [2024-12-11 08:48:43.047019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.279 [2024-12-11 08:48:43.047182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.279 [2024-12-11 08:48:43.047199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.279 [2024-12-11 08:48:43.047511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 [2024-12-11 08:48:43.181271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 Malloc0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 [2024-12-11 08:48:43.229419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 [2024-12-11 08:48:43.253484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.538 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:35.797 [2024-12-11 08:48:43.448364] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:37.188 Initializing NVMe Controllers 00:14:37.188 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:37.188 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:37.188 Initialization complete. Launching workers. 00:14:37.188 ======================================================== 00:14:37.188 Latency(us) 00:14:37.188 Device Information : IOPS MiB/s Average min max 00:14:37.189 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8000.61 7065.59 11102.01 00:14:37.189 ======================================================== 00:14:37.189 Total : 500.00 62.50 8000.61 7065.59 11102.01 00:14:37.189 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.189 rmmod nvme_tcp 00:14:37.189 rmmod nvme_fabrics 00:14:37.189 rmmod nvme_keyring 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 74227 ']' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 74227 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 74227 ']' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 74227 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74227 00:14:37.189 killing process with pid 74227 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74227' 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 74227 00:14:37.189 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 74227 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.448 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.706 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.706 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.706 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:37.707 00:14:37.707 real 0m3.075s 00:14:37.707 user 0m2.456s 00:14:37.707 sys 0m0.707s 00:14:37.707 ************************************ 00:14:37.707 END TEST nvmf_wait_for_buf 00:14:37.707 ************************************ 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.707 ************************************ 00:14:37.707 START TEST nvmf_nsid 00:14:37.707 ************************************ 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:37.707 * Looking for test storage... 
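The wait_for_buf run above finishes by tearing the test environment back down (nvmftestfini). Below is a condensed, hedged sketch of that cleanup path, built only from commands that appear verbatim in the trace (module names, interface names, namespace name, and the pid as logged); the real common.sh adds checks and error handling that are omitted here, and the final netns deletion is an assumption about what _remove_spdk_ns amounts to in this run.

```bash
#!/usr/bin/env bash
# Condensed sketch of the teardown traced above; illustrative only, not SPDK's common.sh.

nvmfpid=74227                      # pid reported by nvmfappstart in this run

# Unload the kernel NVMe-oF initiator modules (the rmmod lines in the log are the -v output)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt reactor; in the real run it is a child of the same shell, so wait works
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

# Drop only the firewall rules tagged SPDK_NVMF during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Dismantle the veth/bridge topology and the target namespace
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns here
```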
00:14:37.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:37.707 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:37.966 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
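The nsid preamble repeats the lcov version gate traced just above: the last field of `lcov --version` is compared against 2, and because 1.15 < 2 the 1.x-style coverage flags are selected. A simplified sketch of that comparison follows; the function names mirror the trace, the implementation is reduced to the path actually exercised here, and the 2.x flag spelling in the else-branch is an assumption rather than something shown in the log.

```bash
#!/usr/bin/env bash
# Simplified sketch of the version gate seen in the trace (lt -> cmp_versions).
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                       # split versions on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do   # compare component by component
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                            # equal is not "less than"
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
else
    lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'   # assumed 2.x spelling
fi
echo "$lcov_rc_opt"
```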
00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.967 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:37.968 Cannot find device "nvmf_init_br" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:37.968 Cannot find device "nvmf_init_br2" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:37.968 Cannot find device "nvmf_tgt_br" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.968 Cannot find device "nvmf_tgt_br2" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:37.968 Cannot find device "nvmf_init_br" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:37.968 Cannot find device "nvmf_init_br2" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:37.968 Cannot find device "nvmf_tgt_br" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:37.968 Cannot find device "nvmf_tgt_br2" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:37.968 Cannot find device "nvmf_br" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:37.968 Cannot find device "nvmf_init_if" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:37.968 Cannot find device "nvmf_init_if2" 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:37.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.968 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:38.227 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
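The nvmf_veth_init trace above first tears down anything left over from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host) and then builds the test topology: two initiator veth pairs kept on the host (10.0.0.1, 10.0.0.2), two target pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), and an nvmf_br bridge joining the host-side peers. A condensed sketch of the same layout, using the interface names from this run (only the first pair of each kind is shown):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # SPDK target lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    # nvmf_init_if2 / nvmf_tgt_if2 (10.0.0.2 / 10.0.0.4) are created the same way

Keeping the target ends inside a namespace lets the kernel NVMe initiator and the SPDK target share one VM without clashing over addresses or ports.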
00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:14:38.228 00:14:38.228 --- 10.0.0.3 ping statistics --- 00:14:38.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.228 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.228 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.228 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:38.228 00:14:38.228 --- 10.0.0.4 ping statistics --- 00:14:38.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.228 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:38.228 00:14:38.228 --- 10.0.0.1 ping statistics --- 00:14:38.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.228 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
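ipts here is a thin wrapper that tags every iptables rule it adds with an "SPDK_NVMF:" comment, which is what makes the later cleanup selective; the pings that follow just prove the bridge forwards traffic both ways between host and namespace. A sketch of what the wrapper and its counterpart evidently do, judging from the expanded commands in this trace:

    # add a rule and tag it so it can be found again later
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # teardown (the iptr call near the end of the test): keep every rule except the tagged ones
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

Tagging by comment means the test can restore the firewall without having to remember exactly which rules it inserted.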
00:14:38.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:38.228 00:14:38.228 --- 10.0.0.2 ping statistics --- 00:14:38.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.228 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74480 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74480 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74480 ']' 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.228 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:38.487 [2024-12-11 08:48:46.018990] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
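With connectivity verified, nvmfappstart launches the SPDK target inside the namespace (so it binds the 10.0.0.3/10.0.0.4 side) and waitforlisten blocks until the RPC socket responds before any configuration is sent. A minimal sketch of that pattern; the polling loop is a paraphrase, not the actual waitforlisten implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &   # -m 1: core 0 only, -e 0xFFFF: trace groups
    nvmfpid=$!

    # poll the RPC socket until the app is ready to take configuration
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done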
00:14:38.487 [2024-12-11 08:48:46.019877] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.487 [2024-12-11 08:48:46.165294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.487 [2024-12-11 08:48:46.195729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.487 [2024-12-11 08:48:46.195779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.487 [2024-12-11 08:48:46.195807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.487 [2024-12-11 08:48:46.195815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.487 [2024-12-11 08:48:46.195822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.487 [2024-12-11 08:48:46.196138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.488 [2024-12-11 08:48:46.225245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74510 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=87f571a0-6792-417f-8827-8ae305a98165 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=afb4c7a4-d510-4b0b-b1b3-23767fc2ea70 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6623cff0-d687-443c-8446-0d39b0ecf0a9 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 null0 00:14:38.747 null1 00:14:38.747 null2 00:14:38.747 [2024-12-11 08:48:46.372529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.747 [2024-12-11 08:48:46.384912] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:38.747 [2024-12-11 08:48:46.385504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74510 ] 00:14:38.747 [2024-12-11 08:48:46.396705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74510 /var/tmp/tgt2.sock 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74510 ']' 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
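The nsid test itself uses two targets: the one above and a second spdk_tgt pinned to core 1 with its own RPC socket (/var/tmp/tgt2.sock), plus three freshly generated UUIDs for the namespaces it is about to create. The rpc_cmd batch is collapsed in the trace (only the null0/null1/null2 bdev names appear), so the following is just a hypothetical sketch of the kind of RPC calls involved, reusing the first UUID from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
    $rpc -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc -s /var/tmp/tgt2.sock bdev_null_create null0 64 512                    # 64 MB bdev, 512 B blocks
    $rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 \
        -n 1 -u 87f571a0-6792-417f-8827-8ae305a98165                            # nsid 1 with an explicit UUID
    $rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
        -t tcp -a 10.0.0.1 -s 4421
    # null1/null2 are attached the same way with ns2uuid/ns3uuid as nsid 2 and 3

Once cnode2 is listening on 10.0.0.1:4421, the entries that follow connect to it with nvme-cli and read each namespace's NGUID back to compare it against the UUID it was created with.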
00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.747 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.006 [2024-12-11 08:48:46.535098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.006 [2024-12-11 08:48:46.574473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.006 [2024-12-11 08:48:46.618937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.006 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.006 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:39.006 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:39.573 [2024-12-11 08:48:47.109210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.573 [2024-12-11 08:48:47.125321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:39.573 nvme0n1 nvme0n2 00:14:39.573 nvme1n1 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:39.573 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 87f571a0-6792-417f-8827-8ae305a98165 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=87f571a06792417f88278ae305a98165 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 87F571A06792417F88278AE305A98165 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 87F571A06792417F88278AE305A98165 == \8\7\F\5\7\1\A\0\6\7\9\2\4\1\7\F\8\8\2\7\8\A\E\3\0\5\A\9\8\1\6\5 ]] 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid afb4c7a4-d510-4b0b-b1b3-23767fc2ea70 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=afb4c7a4d5104b0bb1b323767fc2ea70 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AFB4C7A4D5104B0BB1B323767FC2EA70 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AFB4C7A4D5104B0BB1B323767FC2EA70 == \A\F\B\4\C\7\A\4\D\5\1\0\4\B\0\B\B\1\B\3\2\3\7\6\7\F\C\2\E\A\7\0 ]] 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:40.951 08:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6623cff0-d687-443c-8446-0d39b0ecf0a9 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6623cff0d687443c84460d39b0ecf0a9 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6623CFF0D687443C84460D39B0ECF0A9 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6623CFF0D687443C84460D39B0ECF0A9 == \6\6\2\3\C\F\F\0\D\6\8\7\4\4\3\C\8\4\4\6\0\D\3\9\B\0\E\C\F\0\A\9 ]] 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74510 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74510 ']' 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74510 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.951 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74510 00:14:41.210 killing process with pid 74510 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74510' 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74510 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74510 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:41.210 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:41.469 rmmod nvme_tcp 00:14:41.469 rmmod nvme_fabrics 00:14:41.469 rmmod nvme_keyring 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74480 ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74480 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74480 ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74480 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74480 00:14:41.469 killing process with pid 74480 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74480' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74480 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74480 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:41.469 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.728 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.986 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:41.986 00:14:41.987 real 0m4.167s 00:14:41.987 user 0m6.126s 00:14:41.987 sys 0m1.452s 00:14:41.987 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.987 ************************************ 00:14:41.987 END TEST nvmf_nsid 00:14:41.987 ************************************ 00:14:41.987 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 08:48:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:41.987 ************************************ 00:14:41.987 END TEST nvmf_target_extra 00:14:41.987 ************************************ 00:14:41.987 00:14:41.987 real 4m56.840s 00:14:41.987 user 10m25.256s 00:14:41.987 sys 1m6.234s 00:14:41.987 08:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.987 08:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 08:48:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:41.987 08:48:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.987 08:48:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.987 08:48:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 ************************************ 00:14:41.987 START TEST nvmf_host 00:14:41.987 ************************************ 00:14:41.987 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:41.987 * Looking for test storage... 
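The nvme0n1/n2/n3 checks traced above are the point of the test: uuid2nguid strips the dashes from each generated UUID, nvme_get_nguid pulls the NGUID of the corresponding attached namespace, and the two must match. Per namespace it boils down to something like this (device name and UUID taken from this run):

    uuid=87f571a0-6792-417f-8827-8ae305a98165
    expected=$(tr -d - <<< "$uuid")                           # 87f571a06792417f88278ae305a98165
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "${expected^^}" ]] || exit 1             # compare case-insensitively

With all three namespaces matching, the run disconnects the controller (nvme disconnect -d /dev/nvme0), kills both target processes, and the iptr / nvmf_veth_fini sequence puts the firewall rules, bridges, veths and namespace back the way nvmftestinit found them before the next test (nvmf_identify) starts.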
00:14:41.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:41.987 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.987 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.987 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:42.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.245 --rc genhtml_branch_coverage=1 00:14:42.245 --rc genhtml_function_coverage=1 00:14:42.245 --rc genhtml_legend=1 00:14:42.245 --rc geninfo_all_blocks=1 00:14:42.245 --rc geninfo_unexecuted_blocks=1 00:14:42.245 00:14:42.245 ' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:42.245 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:42.245 --rc genhtml_branch_coverage=1 00:14:42.245 --rc genhtml_function_coverage=1 00:14:42.245 --rc genhtml_legend=1 00:14:42.245 --rc geninfo_all_blocks=1 00:14:42.245 --rc geninfo_unexecuted_blocks=1 00:14:42.245 00:14:42.245 ' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:42.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.245 --rc genhtml_branch_coverage=1 00:14:42.245 --rc genhtml_function_coverage=1 00:14:42.245 --rc genhtml_legend=1 00:14:42.245 --rc geninfo_all_blocks=1 00:14:42.245 --rc geninfo_unexecuted_blocks=1 00:14:42.245 00:14:42.245 ' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:42.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.245 --rc genhtml_branch_coverage=1 00:14:42.245 --rc genhtml_function_coverage=1 00:14:42.245 --rc genhtml_legend=1 00:14:42.245 --rc geninfo_all_blocks=1 00:14:42.245 --rc geninfo_unexecuted_blocks=1 00:14:42.245 00:14:42.245 ' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:42.245 
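The "[: : integer expression expected" message from nvmf/common.sh line 33 (seen again below when identify.sh sources the same file) is a harmless artifact of build_nvmf_app_args: an unset configuration flag reaches a numeric test as an empty string, so the '[' '' -eq 1 ']' comparison errors out and the branch is simply skipped. The usual shell guard is to default the value before comparing; FLAG below is a stand-in, not the variable actually tested at line 33:

    FLAG=""                                        # stand-in for the unset flag
    [ "$FLAG" -eq 1 ] && echo enabled              # -> "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ] && echo enabled         # well-formed; simply false when the flag is unset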
08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:42.245 ************************************ 00:14:42.245 START TEST nvmf_identify 00:14:42.245 ************************************ 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:42.245 * Looking for test storage... 00:14:42.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:14:42.245 08:48:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.245 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:42.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.505 --rc genhtml_branch_coverage=1 00:14:42.505 --rc genhtml_function_coverage=1 00:14:42.505 --rc genhtml_legend=1 00:14:42.505 --rc geninfo_all_blocks=1 00:14:42.505 --rc geninfo_unexecuted_blocks=1 00:14:42.505 00:14:42.505 ' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:42.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.505 --rc genhtml_branch_coverage=1 00:14:42.505 --rc genhtml_function_coverage=1 00:14:42.505 --rc genhtml_legend=1 00:14:42.505 --rc geninfo_all_blocks=1 00:14:42.505 --rc geninfo_unexecuted_blocks=1 00:14:42.505 00:14:42.505 ' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:42.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.505 --rc genhtml_branch_coverage=1 00:14:42.505 --rc genhtml_function_coverage=1 00:14:42.505 --rc genhtml_legend=1 00:14:42.505 --rc geninfo_all_blocks=1 00:14:42.505 --rc geninfo_unexecuted_blocks=1 00:14:42.505 00:14:42.505 ' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:42.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.505 --rc genhtml_branch_coverage=1 00:14:42.505 --rc genhtml_function_coverage=1 00:14:42.505 --rc genhtml_legend=1 00:14:42.505 --rc geninfo_all_blocks=1 00:14:42.505 --rc geninfo_unexecuted_blocks=1 00:14:42.505 00:14:42.505 ' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.505 
08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.505 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.506 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.506 08:48:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:42.506 Cannot find device "nvmf_init_br" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:42.506 Cannot find device "nvmf_init_br2" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:42.506 Cannot find device "nvmf_tgt_br" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:42.506 Cannot find device "nvmf_tgt_br2" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:42.506 Cannot find device "nvmf_init_br" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:42.506 Cannot find device "nvmf_init_br2" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:42.506 Cannot find device "nvmf_tgt_br" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:42.506 Cannot find device "nvmf_tgt_br2" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:42.506 Cannot find device "nvmf_br" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:42.506 Cannot find device "nvmf_init_if" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:42.506 Cannot find device "nvmf_init_if2" 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.506 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.766 
08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:42.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:42.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:42.766 00:14:42.766 --- 10.0.0.3 ping statistics --- 00:14:42.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.766 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:42.766 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:42.766 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:14:42.766 00:14:42.766 --- 10.0.0.4 ping statistics --- 00:14:42.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.766 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:42.766 00:14:42.766 --- 10.0.0.1 ping statistics --- 00:14:42.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.766 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:42.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:42.766 00:14:42.766 --- 10.0.0.2 ping statistics --- 00:14:42.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.766 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74860 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74860 00:14:42.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
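The nvmf_veth_init sequence above builds the whole test topology from nothing: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, iptables openings for TCP port 4420, and one ping per address as a connectivity check. As a minimal sketch, here is the same setup reduced to the first initiator/target pair; the interface names, addresses and rules are the ones visible in the log, and the reduction to a single pair is the only simplification:

  # create the target network namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

  # address the endpoints (initiator 10.0.0.1, target 10.0.0.3) and bring the links up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together so the two ends can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # open the NVMe/TCP port through the firewall, then sanity-check both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1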
00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74860 ']' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.766 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.025 [2024-12-11 08:48:50.560379] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:43.025 [2024-12-11 08:48:50.560672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.025 [2024-12-11 08:48:50.712051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.025 [2024-12-11 08:48:50.744243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.025 [2024-12-11 08:48:50.744460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.025 [2024-12-11 08:48:50.744621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.025 [2024-12-11 08:48:50.744634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.025 [2024-12-11 08:48:50.744641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
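waitforlisten above is the harness helper that blocks until the freshly started nvmf_tgt (pid 74860) answers on its RPC socket. Outside the harness, the launch-and-wait step looks roughly like the sketch below; the binary path, namespace and flags are copied from the log, while the polling loop with spdk_get_version is an assumed readiness probe, not the helper's actual implementation:

  NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # start the target inside the namespace: shm id 0, all tracepoint groups, 4-core mask
  ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the app responds on its default RPC socket /var/tmp/spdk.sock
  until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done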
00:14:43.025 [2024-12-11 08:48:50.749207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.025 [2024-12-11 08:48:50.749341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.025 [2024-12-11 08:48:50.749418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.025 [2024-12-11 08:48:50.749529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.025 [2024-12-11 08:48:50.779226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 [2024-12-11 08:48:50.840288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 Malloc0 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 [2024-12-11 08:48:50.954653] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 [ 00:14:43.284 { 00:14:43.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:43.284 "subtype": "Discovery", 00:14:43.284 "listen_addresses": [ 00:14:43.284 { 00:14:43.284 "trtype": "TCP", 00:14:43.284 "adrfam": "IPv4", 00:14:43.284 "traddr": "10.0.0.3", 00:14:43.284 "trsvcid": "4420" 00:14:43.284 } 00:14:43.284 ], 00:14:43.285 "allow_any_host": true, 00:14:43.285 "hosts": [] 00:14:43.285 }, 00:14:43.285 { 00:14:43.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.285 "subtype": "NVMe", 00:14:43.285 "listen_addresses": [ 00:14:43.285 { 00:14:43.285 "trtype": "TCP", 00:14:43.285 "adrfam": "IPv4", 00:14:43.285 "traddr": "10.0.0.3", 00:14:43.285 "trsvcid": "4420" 00:14:43.285 } 00:14:43.285 ], 00:14:43.285 "allow_any_host": true, 00:14:43.285 "hosts": [], 00:14:43.285 "serial_number": "SPDK00000000000001", 00:14:43.285 "model_number": "SPDK bdev Controller", 00:14:43.285 "max_namespaces": 32, 00:14:43.285 "min_cntlid": 1, 00:14:43.285 "max_cntlid": 65519, 00:14:43.285 "namespaces": [ 00:14:43.285 { 00:14:43.285 "nsid": 1, 00:14:43.285 "bdev_name": "Malloc0", 00:14:43.285 "name": "Malloc0", 00:14:43.285 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:43.285 "eui64": "ABCDEF0123456789", 00:14:43.285 "uuid": "fd609558-3d23-4e70-a1a6-59251796e1a1" 00:14:43.285 } 00:14:43.285 ] 00:14:43.285 } 00:14:43.285 ] 00:14:43.285 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.285 08:48:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:43.285 [2024-12-11 08:48:51.017435] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
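The rpc_cmd calls above configure the target end to end: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as namespace 1, and listeners for both the subsystem and the discovery service on 10.0.0.3:4420, after which spdk_nvme_identify is pointed at the discovery NQN. Reproduced directly with scripts/rpc.py (every value below is the one visible in the log; only the explicit rpc.py/socket invocation style is an assumption about how rpc_cmd is wired up):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # TCP transport with the options the test passes, plus a 64 MiB / 512-byte-block malloc bdev
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # subsystem cnode1: allow any host, attach Malloc0, listen on 10.0.0.3:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # dump the resulting subsystems (the JSON shown above), then query the discovery service
  $RPC nvmf_get_subsystems
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all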
00:14:43.285 [2024-12-11 08:48:51.017485] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74887 ] 00:14:43.546 [2024-12-11 08:48:51.176279] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:43.546 [2024-12-11 08:48:51.176341] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:43.546 [2024-12-11 08:48:51.176348] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:43.546 [2024-12-11 08:48:51.176359] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:43.546 [2024-12-11 08:48:51.176369] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:43.546 [2024-12-11 08:48:51.176625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:43.546 [2024-12-11 08:48:51.176687] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1981750 0 00:14:43.546 [2024-12-11 08:48:51.183155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:43.546 [2024-12-11 08:48:51.183182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:43.546 [2024-12-11 08:48:51.183189] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:43.546 [2024-12-11 08:48:51.183193] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:43.546 [2024-12-11 08:48:51.183228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.546 [2024-12-11 08:48:51.183236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.546 [2024-12-11 08:48:51.183240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.546 [2024-12-11 08:48:51.183254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:43.546 [2024-12-11 08:48:51.183286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.546 [2024-12-11 08:48:51.194196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.546 [2024-12-11 08:48:51.194220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.546 [2024-12-11 08:48:51.194226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.546 [2024-12-11 08:48:51.194232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.546 [2024-12-11 08:48:51.194244] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:43.546 [2024-12-11 08:48:51.194253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:43.546 [2024-12-11 08:48:51.194260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:43.546 [2024-12-11 08:48:51.194279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.546 [2024-12-11 08:48:51.194285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:43.546 [2024-12-11 08:48:51.194290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.546 [2024-12-11 08:48:51.194300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.546 [2024-12-11 08:48:51.194329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.194392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.194399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.194403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.194418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:43.547 [2024-12-11 08:48:51.194427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:43.547 [2024-12-11 08:48:51.194435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.194452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.194472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.194516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.194524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.194528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.194538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:43.547 [2024-12-11 08:48:51.194547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.194555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.194571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.194589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.194637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.194644] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.194648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.194658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.194669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.194686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.194703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.194748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.194755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.194758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.194768] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:43.547 [2024-12-11 08:48:51.194774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.194782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.194893] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:43.547 [2024-12-11 08:48:51.194899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.194909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.194918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.194926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.194945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.194989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.194996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.195000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:43.547 [2024-12-11 08:48:51.195004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.195010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.547 [2024-12-11 08:48:51.195020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.195037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.195064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.195116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.195123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.195127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.195159] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.547 [2024-12-11 08:48:51.195165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:43.547 [2024-12-11 08:48:51.195175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:43.547 [2024-12-11 08:48:51.195186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.547 [2024-12-11 08:48:51.195208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.195221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.547 [2024-12-11 08:48:51.195242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.195331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.547 [2024-12-11 08:48:51.195338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.547 [2024-12-11 08:48:51.195342] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981750): datao=0, datal=4096, cccid=0 00:14:43.547 [2024-12-11 08:48:51.195352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e5740) on tqpair(0x1981750): expected_datao=0, payload_size=4096 00:14:43.547 [2024-12-11 08:48:51.195357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195365] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195370] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.195386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.547 [2024-12-11 08:48:51.195390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.547 [2024-12-11 08:48:51.195403] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:43.547 [2024-12-11 08:48:51.195409] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:43.547 [2024-12-11 08:48:51.195414] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:43.547 [2024-12-11 08:48:51.195419] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:43.547 [2024-12-11 08:48:51.195424] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:43.547 [2024-12-11 08:48:51.195430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:43.547 [2024-12-11 08:48:51.195439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.547 [2024-12-11 08:48:51.195447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.547 [2024-12-11 08:48:51.195456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.547 [2024-12-11 08:48:51.195464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.547 [2024-12-11 08:48:51.195483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.547 [2024-12-11 08:48:51.195539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.547 [2024-12-11 08:48:51.195546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.195550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.548 [2024-12-11 08:48:51.195562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.548 
[2024-12-11 08:48:51.195585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.548 [2024-12-11 08:48:51.195607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.548 [2024-12-11 08:48:51.195628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.548 [2024-12-11 08:48:51.195649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.548 [2024-12-11 08:48:51.195663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.548 [2024-12-11 08:48:51.195671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.548 [2024-12-11 08:48:51.195703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5740, cid 0, qid 0 00:14:43.548 [2024-12-11 08:48:51.195711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e58c0, cid 1, qid 0 00:14:43.548 [2024-12-11 08:48:51.195716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5a40, cid 2, qid 0 00:14:43.548 [2024-12-11 08:48:51.195721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.548 [2024-12-11 08:48:51.195727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5d40, cid 4, qid 0 00:14:43.548 [2024-12-11 08:48:51.195811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.195818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.195822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5d40) on tqpair=0x1981750 00:14:43.548 [2024-12-11 
08:48:51.195833] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:43.548 [2024-12-11 08:48:51.195838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:43.548 [2024-12-11 08:48:51.195850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.195863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.548 [2024-12-11 08:48:51.195881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5d40, cid 4, qid 0 00:14:43.548 [2024-12-11 08:48:51.195940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.548 [2024-12-11 08:48:51.195947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.548 [2024-12-11 08:48:51.195951] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981750): datao=0, datal=4096, cccid=4 00:14:43.548 [2024-12-11 08:48:51.195960] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e5d40) on tqpair(0x1981750): expected_datao=0, payload_size=4096 00:14:43.548 [2024-12-11 08:48:51.195965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195973] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.195987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.195993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.195997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5d40) on tqpair=0x1981750 00:14:43.548 [2024-12-11 08:48:51.196015] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:43.548 [2024-12-11 08:48:51.196046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.196060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.548 [2024-12-11 08:48:51.196068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.196084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.548 [2024-12-11 08:48:51.196108] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5d40, cid 4, qid 0 00:14:43.548 [2024-12-11 08:48:51.196116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5ec0, cid 5, qid 0 00:14:43.548 [2024-12-11 08:48:51.196243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.548 [2024-12-11 08:48:51.196252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.548 [2024-12-11 08:48:51.196256] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196260] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981750): datao=0, datal=1024, cccid=4 00:14:43.548 [2024-12-11 08:48:51.196265] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e5d40) on tqpair(0x1981750): expected_datao=0, payload_size=1024 00:14:43.548 [2024-12-11 08:48:51.196270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196277] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196282] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.196294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.196298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5ec0) on tqpair=0x1981750 00:14:43.548 [2024-12-11 08:48:51.196321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.196329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.196333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5d40) on tqpair=0x1981750 00:14:43.548 [2024-12-11 08:48:51.196350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.196363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.548 [2024-12-11 08:48:51.196388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5d40, cid 4, qid 0 00:14:43.548 [2024-12-11 08:48:51.196459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.548 [2024-12-11 08:48:51.196466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.548 [2024-12-11 08:48:51.196470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196474] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981750): datao=0, datal=3072, cccid=4 00:14:43.548 [2024-12-11 08:48:51.196479] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e5d40) on tqpair(0x1981750): expected_datao=0, payload_size=3072 00:14:43.548 [2024-12-11 08:48:51.196484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196491] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:14:43.548 [2024-12-11 08:48:51.196496] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.196511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.548 [2024-12-11 08:48:51.196515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5d40) on tqpair=0x1981750 00:14:43.548 [2024-12-11 08:48:51.196529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981750) 00:14:43.548 [2024-12-11 08:48:51.196542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.548 [2024-12-11 08:48:51.196564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5d40, cid 4, qid 0 00:14:43.548 [2024-12-11 08:48:51.196633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.548 [2024-12-11 08:48:51.196640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.548 [2024-12-11 08:48:51.196644] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196648] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981750): datao=0, datal=8, cccid=4 00:14:43.548 [2024-12-11 08:48:51.196653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e5d40) on tqpair(0x1981750): expected_datao=0, payload_size=8 00:14:43.548 [2024-12-11 08:48:51.196658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196665] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196669] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.548 [2024-12-11 08:48:51.196684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.548 [2024-12-11 08:48:51.196691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.549 [2024-12-11 08:48:51.196695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.549 [2024-12-11 08:48:51.196699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5d40) on tqpair=0x1981750 00:14:43.549 ===================================================== 00:14:43.549 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:43.549 ===================================================== 00:14:43.549 Controller Capabilities/Features 00:14:43.549 ================================ 00:14:43.549 Vendor ID: 0000 00:14:43.549 Subsystem Vendor ID: 0000 00:14:43.549 Serial Number: .................... 00:14:43.549 Model Number: ........................................ 
00:14:43.549 Firmware Version: 25.01
00:14:43.549 Recommended Arb Burst: 0
00:14:43.549 IEEE OUI Identifier: 00 00 00
00:14:43.549 Multi-path I/O
00:14:43.549 May have multiple subsystem ports: No
00:14:43.549 May have multiple controllers: No
00:14:43.549 Associated with SR-IOV VF: No
00:14:43.549 Max Data Transfer Size: 131072
00:14:43.549 Max Number of Namespaces: 0
00:14:43.549 Max Number of I/O Queues: 1024
00:14:43.549 NVMe Specification Version (VS): 1.3
00:14:43.549 NVMe Specification Version (Identify): 1.3
00:14:43.549 Maximum Queue Entries: 128
00:14:43.549 Contiguous Queues Required: Yes
00:14:43.549 Arbitration Mechanisms Supported
00:14:43.549 Weighted Round Robin: Not Supported
00:14:43.549 Vendor Specific: Not Supported
00:14:43.549 Reset Timeout: 15000 ms
00:14:43.549 Doorbell Stride: 4 bytes
00:14:43.549 NVM Subsystem Reset: Not Supported
00:14:43.549 Command Sets Supported
00:14:43.549 NVM Command Set: Supported
00:14:43.549 Boot Partition: Not Supported
00:14:43.549 Memory Page Size Minimum: 4096 bytes
00:14:43.549 Memory Page Size Maximum: 4096 bytes
00:14:43.549 Persistent Memory Region: Not Supported
00:14:43.549 Optional Asynchronous Events Supported
00:14:43.549 Namespace Attribute Notices: Not Supported
00:14:43.549 Firmware Activation Notices: Not Supported
00:14:43.549 ANA Change Notices: Not Supported
00:14:43.549 PLE Aggregate Log Change Notices: Not Supported
00:14:43.549 LBA Status Info Alert Notices: Not Supported
00:14:43.549 EGE Aggregate Log Change Notices: Not Supported
00:14:43.549 Normal NVM Subsystem Shutdown event: Not Supported
00:14:43.549 Zone Descriptor Change Notices: Not Supported
00:14:43.549 Discovery Log Change Notices: Supported
00:14:43.549 Controller Attributes
00:14:43.549 128-bit Host Identifier: Not Supported
00:14:43.549 Non-Operational Permissive Mode: Not Supported
00:14:43.549 NVM Sets: Not Supported
00:14:43.549 Read Recovery Levels: Not Supported
00:14:43.549 Endurance Groups: Not Supported
00:14:43.549 Predictable Latency Mode: Not Supported
00:14:43.549 Traffic Based Keep ALive: Not Supported
00:14:43.549 Namespace Granularity: Not Supported
00:14:43.549 SQ Associations: Not Supported
00:14:43.549 UUID List: Not Supported
00:14:43.549 Multi-Domain Subsystem: Not Supported
00:14:43.549 Fixed Capacity Management: Not Supported
00:14:43.549 Variable Capacity Management: Not Supported
00:14:43.549 Delete Endurance Group: Not Supported
00:14:43.549 Delete NVM Set: Not Supported
00:14:43.549 Extended LBA Formats Supported: Not Supported
00:14:43.549 Flexible Data Placement Supported: Not Supported
00:14:43.549 
00:14:43.549 Controller Memory Buffer Support
00:14:43.549 ================================
00:14:43.549 Supported: No
00:14:43.549 
00:14:43.549 Persistent Memory Region Support
00:14:43.549 ================================
00:14:43.549 Supported: No
00:14:43.549 
00:14:43.549 Admin Command Set Attributes
00:14:43.549 ============================
00:14:43.549 Security Send/Receive: Not Supported
00:14:43.549 Format NVM: Not Supported
00:14:43.549 Firmware Activate/Download: Not Supported
00:14:43.549 Namespace Management: Not Supported
00:14:43.549 Device Self-Test: Not Supported
00:14:43.549 Directives: Not Supported
00:14:43.549 NVMe-MI: Not Supported
00:14:43.549 Virtualization Management: Not Supported
00:14:43.549 Doorbell Buffer Config: Not Supported
00:14:43.549 Get LBA Status Capability: Not Supported
00:14:43.549 Command & Feature Lockdown Capability: Not Supported
00:14:43.549 Abort Command Limit: 1
00:14:43.549 Async Event Request Limit: 4
00:14:43.549 Number of Firmware Slots: N/A
00:14:43.549 Firmware Slot 1 Read-Only: N/A
00:14:43.549 Firmware Activation Without Reset: N/A
00:14:43.549 Multiple Update Detection Support: N/A
00:14:43.549 Firmware Update Granularity: No Information Provided
00:14:43.549 Per-Namespace SMART Log: No
00:14:43.549 Asymmetric Namespace Access Log Page: Not Supported
00:14:43.549 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:14:43.549 Command Effects Log Page: Not Supported
00:14:43.549 Get Log Page Extended Data: Supported
00:14:43.549 Telemetry Log Pages: Not Supported
00:14:43.549 Persistent Event Log Pages: Not Supported
00:14:43.549 Supported Log Pages Log Page: May Support
00:14:43.549 Commands Supported & Effects Log Page: Not Supported
00:14:43.549 Feature Identifiers & Effects Log Page:May Support
00:14:43.549 NVMe-MI Commands & Effects Log Page: May Support
00:14:43.549 Data Area 4 for Telemetry Log: Not Supported
00:14:43.549 Error Log Page Entries Supported: 128
00:14:43.549 Keep Alive: Not Supported
00:14:43.549 
00:14:43.549 NVM Command Set Attributes
00:14:43.549 ==========================
00:14:43.549 Submission Queue Entry Size
00:14:43.549 Max: 1
00:14:43.549 Min: 1
00:14:43.549 Completion Queue Entry Size
00:14:43.549 Max: 1
00:14:43.549 Min: 1
00:14:43.549 Number of Namespaces: 0
00:14:43.549 Compare Command: Not Supported
00:14:43.549 Write Uncorrectable Command: Not Supported
00:14:43.549 Dataset Management Command: Not Supported
00:14:43.549 Write Zeroes Command: Not Supported
00:14:43.549 Set Features Save Field: Not Supported
00:14:43.549 Reservations: Not Supported
00:14:43.549 Timestamp: Not Supported
00:14:43.549 Copy: Not Supported
00:14:43.549 Volatile Write Cache: Not Present
00:14:43.549 Atomic Write Unit (Normal): 1
00:14:43.549 Atomic Write Unit (PFail): 1
00:14:43.549 Atomic Compare & Write Unit: 1
00:14:43.549 Fused Compare & Write: Supported
00:14:43.549 Scatter-Gather List
00:14:43.549 SGL Command Set: Supported
00:14:43.549 SGL Keyed: Supported
00:14:43.549 SGL Bit Bucket Descriptor: Not Supported
00:14:43.549 SGL Metadata Pointer: Not Supported
00:14:43.549 Oversized SGL: Not Supported
00:14:43.549 SGL Metadata Address: Not Supported
00:14:43.549 SGL Offset: Supported
00:14:43.549 Transport SGL Data Block: Not Supported
00:14:43.549 Replay Protected Memory Block: Not Supported
00:14:43.549 
00:14:43.549 Firmware Slot Information
00:14:43.549 =========================
00:14:43.549 Active slot: 0
00:14:43.549 
00:14:43.549 
00:14:43.549 Error Log
00:14:43.549 =========
00:14:43.549 
00:14:43.549 Active Namespaces
00:14:43.549 =================
00:14:43.549 Discovery Log Page
00:14:43.549 ==================
00:14:43.549 Generation Counter: 2
00:14:43.549 Number of Records: 2
00:14:43.549 Record Format: 0
00:14:43.549 
00:14:43.549 Discovery Log Entry 0
00:14:43.549 ----------------------
00:14:43.549 Transport Type: 3 (TCP)
00:14:43.549 Address Family: 1 (IPv4)
00:14:43.549 Subsystem Type: 3 (Current Discovery Subsystem)
00:14:43.549 Entry Flags:
00:14:43.549 Duplicate Returned Information: 1
00:14:43.549 Explicit Persistent Connection Support for Discovery: 1
00:14:43.549 Transport Requirements:
00:14:43.549 Secure Channel: Not Required
00:14:43.549 Port ID: 0 (0x0000)
00:14:43.549 Controller ID: 65535 (0xffff)
00:14:43.549 Admin Max SQ Size: 128
00:14:43.549 Transport Service Identifier: 4420
00:14:43.549 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:14:43.549 Transport Address: 10.0.0.3
00:14:43.549 
Discovery Log Entry 1 00:14:43.549 ---------------------- 00:14:43.549 Transport Type: 3 (TCP) 00:14:43.549 Address Family: 1 (IPv4) 00:14:43.549 Subsystem Type: 2 (NVM Subsystem) 00:14:43.549 Entry Flags: 00:14:43.549 Duplicate Returned Information: 0 00:14:43.550 Explicit Persistent Connection Support for Discovery: 0 00:14:43.550 Transport Requirements: 00:14:43.550 Secure Channel: Not Required 00:14:43.550 Port ID: 0 (0x0000) 00:14:43.550 Controller ID: 65535 (0xffff) 00:14:43.550 Admin Max SQ Size: 128 00:14:43.550 Transport Service Identifier: 4420 00:14:43.550 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:43.550 Transport Address: 10.0.0.3 [2024-12-11 08:48:51.196798] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:43.550 [2024-12-11 08:48:51.196813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5740) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.196820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.550 [2024-12-11 08:48:51.196826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e58c0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.196832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.550 [2024-12-11 08:48:51.196837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5a40) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.196843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.550 [2024-12-11 08:48:51.196848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.196853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.550 [2024-12-11 08:48:51.196866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.196871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.196875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.196884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.196907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.196953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.196960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.196964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.196969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.196977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.196981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.196985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 
08:48:51.196993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197094] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:43.550 [2024-12-11 08:48:51.197099] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:43.550 [2024-12-11 08:48:51.197110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197344] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.550 [2024-12-11 08:48:51.197866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.550 [2024-12-11 08:48:51.197875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.550 [2024-12-11 08:48:51.197883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.550 [2024-12-11 08:48:51.197900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.550 [2024-12-11 08:48:51.197948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.550 [2024-12-11 08:48:51.197955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.550 [2024-12-11 08:48:51.197959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.197963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.551 [2024-12-11 08:48:51.197974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.197979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.197983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.551 [2024-12-11 08:48:51.197991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.551 [2024-12-11 08:48:51.198008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.551 
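The repeated FABRIC PROPERTY GET / complete tcp_req records above come from the discovery controller being polled and shut down once its identify data, including the two discovery log entries, has been printed. To regenerate just that discovery report by hand, a minimal sketch is to point the same identify binary at the well-known discovery NQN; the binary path and target address below are copied from the spdk_nvme_identify command that appears a few lines further down, so treat them as assumptions about this particular VM.

# Sketch: re-query the discovery subsystem outside the test script.
# Binary path and address are taken from the invocation below; adjust for
# your own environment.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
"$IDENTIFY" -r "$TRID"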
[2024-12-11 08:48:51.198056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.551 [2024-12-11 08:48:51.198063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.551 [2024-12-11 08:48:51.198067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.198071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.551 [2024-12-11 08:48:51.198082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.198087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.198091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.551 [2024-12-11 08:48:51.198099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.551 [2024-12-11 08:48:51.198116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.551 [2024-12-11 08:48:51.202157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.551 [2024-12-11 08:48:51.202179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.551 [2024-12-11 08:48:51.202185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.202190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.551 [2024-12-11 08:48:51.202205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.202210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.202214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981750) 00:14:43.551 [2024-12-11 08:48:51.202224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.551 [2024-12-11 08:48:51.202249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e5bc0, cid 3, qid 0 00:14:43.551 [2024-12-11 08:48:51.202299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.551 [2024-12-11 08:48:51.202306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.551 [2024-12-11 08:48:51.202310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.551 [2024-12-11 08:48:51.202314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e5bc0) on tqpair=0x1981750 00:14:43.551 [2024-12-11 08:48:51.202324] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:14:43.551 00:14:43.551 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:43.551 [2024-12-11 08:48:51.245985] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
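This second run targets the NVM subsystem nqn.2016-06.io.spdk:cnode1 directly, and -L all switches on every SPDK debug log flag, which is why the report that follows is interleaved with *DEBUG* traces from nvme_tcp.c, nvme_ctrlr.c and nvme_qpair.c. One way to get a readable summary out of such a capture is to pull the admin-command *NOTICE* lines back out of it; the file name below is hypothetical.

# Sketch: summarize which admin commands the initiator sent during the run.
# identify_cnode1.log is a hypothetical capture of the output below.
grep 'nvme_admin_qpair_print_command' identify_cnode1.log \
  | grep -o '\*NOTICE\*: [A-Z ]*' \
  | sort | uniq -c | sort -rn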
00:14:43.551 [2024-12-11 08:48:51.246037] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74889 ] 00:14:43.816 [2024-12-11 08:48:51.403543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:43.816 [2024-12-11 08:48:51.403613] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:43.816 [2024-12-11 08:48:51.403620] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:43.816 [2024-12-11 08:48:51.403630] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:43.816 [2024-12-11 08:48:51.403638] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:43.816 [2024-12-11 08:48:51.403913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:43.816 [2024-12-11 08:48:51.403970] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d2c750 0 00:14:43.816 [2024-12-11 08:48:51.420201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:43.816 [2024-12-11 08:48:51.420225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:43.816 [2024-12-11 08:48:51.420247] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:43.816 [2024-12-11 08:48:51.420251] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:43.816 [2024-12-11 08:48:51.420285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.420292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.420296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.420308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:43.816 [2024-12-11 08:48:51.420338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.428205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.816 [2024-12-11 08:48:51.428226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.816 [2024-12-11 08:48:51.428247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.816 [2024-12-11 08:48:51.428266] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:43.816 [2024-12-11 08:48:51.428275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:43.816 [2024-12-11 08:48:51.428281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:43.816 [2024-12-11 08:48:51.428299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428308] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.428318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.816 [2024-12-11 08:48:51.428344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.428400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.816 [2024-12-11 08:48:51.428407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.816 [2024-12-11 08:48:51.428411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.816 [2024-12-11 08:48:51.428424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:43.816 [2024-12-11 08:48:51.428433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:43.816 [2024-12-11 08:48:51.428441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.428490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.816 [2024-12-11 08:48:51.428510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.428561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.816 [2024-12-11 08:48:51.428569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.816 [2024-12-11 08:48:51.428573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.816 [2024-12-11 08:48:51.428583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:43.816 [2024-12-11 08:48:51.428593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.816 [2024-12-11 08:48:51.428600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.428617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.816 [2024-12-11 08:48:51.428636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.428682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.816 [2024-12-11 08:48:51.428689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.816 
[2024-12-11 08:48:51.428693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.816 [2024-12-11 08:48:51.428704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.816 [2024-12-11 08:48:51.428721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.428738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.816 [2024-12-11 08:48:51.428756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.428799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.816 [2024-12-11 08:48:51.428806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.816 [2024-12-11 08:48:51.428810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.816 [2024-12-11 08:48:51.428820] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:43.816 [2024-12-11 08:48:51.428825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:43.816 [2024-12-11 08:48:51.428834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.816 [2024-12-11 08:48:51.428944] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:43.816 [2024-12-11 08:48:51.428951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.816 [2024-12-11 08:48:51.428960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.816 [2024-12-11 08:48:51.428968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.816 [2024-12-11 08:48:51.428976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.816 [2024-12-11 08:48:51.428995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.816 [2024-12-11 08:48:51.429038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.429045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.429049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 
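The records above are the standard controller-enable handshake carried over fabrics property capsules: the initiator has read VS and CAP, seen CC.EN = 0 and CSTS.RDY = 0, and now writes CC.EN = 1; the lines that follow poll until CSTS.RDY = 1. Both EN and RDY sit in bit 0 of their registers, so they are trivial to decode from raw property values. The values in the sketch below are made up purely to show the bit positions (0x00460001 is a typical CC with 64-byte SQ entries, 16-byte CQ entries and EN set); the log itself does not print the raw registers.

# Sketch with hypothetical register values: CC.EN and CSTS.RDY are bit 0.
cc=0x00460001    # hypothetical CC: IOSQES=6, IOCQES=4, EN=1
csts=0x00000001  # hypothetical CSTS: RDY=1
echo "CC.EN=$(( cc & 1 )) CSTS.RDY=$(( csts & 1 ))"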
00:14:43.817 [2024-12-11 08:48:51.429059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.817 [2024-12-11 08:48:51.429070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.817 [2024-12-11 08:48:51.429104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.817 [2024-12-11 08:48:51.429168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.429176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.429180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.817 [2024-12-11 08:48:51.429191] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.817 [2024-12-11 08:48:51.429210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:43.817 [2024-12-11 08:48:51.429231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.817 [2024-12-11 08:48:51.429284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.817 [2024-12-11 08:48:51.429382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.817 [2024-12-11 08:48:51.429390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.817 [2024-12-11 08:48:51.429394] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429399] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=4096, cccid=0 00:14:43.817 [2024-12-11 08:48:51.429404] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90740) on tqpair(0x1d2c750): expected_datao=0, payload_size=4096 00:14:43.817 [2024-12-11 08:48:51.429409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429417] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429422] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.429438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.429442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.817 [2024-12-11 08:48:51.429456] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:43.817 [2024-12-11 08:48:51.429462] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:43.817 [2024-12-11 08:48:51.429467] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:43.817 [2024-12-11 08:48:51.429473] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:43.817 [2024-12-11 08:48:51.429478] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:43.817 [2024-12-11 08:48:51.429484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.817 [2024-12-11 08:48:51.429554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.817 [2024-12-11 08:48:51.429606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.429613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.429617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.817 [2024-12-11 08:48:51.429629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.817 [2024-12-11 08:48:51.429652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 
08:48:51.429661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.817 [2024-12-11 08:48:51.429674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.817 [2024-12-11 08:48:51.429695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.817 [2024-12-11 08:48:51.429715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.817 [2024-12-11 08:48:51.429770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90740, cid 0, qid 0 00:14:43.817 [2024-12-11 08:48:51.429778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d908c0, cid 1, qid 0 00:14:43.817 [2024-12-11 08:48:51.429783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90a40, cid 2, qid 0 00:14:43.817 [2024-12-11 08:48:51.429788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.817 [2024-12-11 08:48:51.429794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.817 [2024-12-11 08:48:51.429874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.429881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.429885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.817 [2024-12-11 08:48:51.429895] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:43.817 [2024-12-11 08:48:51.429902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.429930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.429938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.429946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.817 [2024-12-11 08:48:51.429964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.817 [2024-12-11 08:48:51.430020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.817 [2024-12-11 08:48:51.430027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.817 [2024-12-11 08:48:51.430031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.430035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.817 [2024-12-11 08:48:51.430098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.430109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:43.817 [2024-12-11 08:48:51.430118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.817 [2024-12-11 08:48:51.430122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.817 [2024-12-11 08:48:51.430130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.817 [2024-12-11 08:48:51.430160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.817 [2024-12-11 08:48:51.430230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.817 [2024-12-11 08:48:51.430238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.818 [2024-12-11 08:48:51.430242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430246] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=4096, cccid=4 00:14:43.818 [2024-12-11 08:48:51.430251] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90d40) on tqpair(0x1d2c750): expected_datao=0, payload_size=4096 00:14:43.818 [2024-12-11 08:48:51.430256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430264] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430268] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 
08:48:51.430277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.430283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.430287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.430311] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:43.818 [2024-12-11 08:48:51.430322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.430354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.430375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.818 [2024-12-11 08:48:51.430484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.818 [2024-12-11 08:48:51.430497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.818 [2024-12-11 08:48:51.430502] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430506] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=4096, cccid=4 00:14:43.818 [2024-12-11 08:48:51.430511] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90d40) on tqpair(0x1d2c750): expected_datao=0, payload_size=4096 00:14:43.818 [2024-12-11 08:48:51.430516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430524] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.430544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.430548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.430567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.430601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.430622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.818 [2024-12-11 08:48:51.430686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.818 [2024-12-11 08:48:51.430703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.818 [2024-12-11 08:48:51.430708] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430713] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=4096, cccid=4 00:14:43.818 [2024-12-11 08:48:51.430718] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90d40) on tqpair(0x1d2c750): expected_datao=0, payload_size=4096 00:14:43.818 [2024-12-11 08:48:51.430723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.430751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.430755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.430769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430820] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:43.818 [2024-12-11 08:48:51.430825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:43.818 [2024-12-11 08:48:51.430831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:43.818 [2024-12-11 08:48:51.430846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 
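At this point the controller has walked the whole initialization state machine, from connecting the admin queue through identify, AER configuration, keep-alive, queue-count negotiation and namespace identification, and finally reaches the ready state. Because -L all logs every transition as "setting state to ...", the sequence can be recovered from a captured log with plain text tools; the file name below is again hypothetical.

# Sketch: list the controller-init state transitions in order.
grep -o 'setting state to [a-z A-Z0-9.=]*' identify_cnode1.log | uniq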
[2024-12-11 08:48:51.430851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.430859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.430866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.430882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.818 [2024-12-11 08:48:51.430906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.818 [2024-12-11 08:48:51.430914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90ec0, cid 5, qid 0 00:14:43.818 [2024-12-11 08:48:51.430973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.430981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.430985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.430989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.430997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.431003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.431007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90ec0) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.431023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.431035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.431082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90ec0, cid 5, qid 0 00:14:43.818 [2024-12-11 08:48:51.431151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.431160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.431164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90ec0) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.431181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.431194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.431214] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90ec0, cid 5, qid 0 00:14:43.818 [2024-12-11 08:48:51.431292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.431302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.431306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90ec0) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.431323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.431336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.431357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90ec0, cid 5, qid 0 00:14:43.818 [2024-12-11 08:48:51.431405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.818 [2024-12-11 08:48:51.431412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.818 [2024-12-11 08:48:51.431416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90ec0) on tqpair=0x1d2c750 00:14:43.818 [2024-12-11 08:48:51.431442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.431456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.818 [2024-12-11 08:48:51.431465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.818 [2024-12-11 08:48:51.431469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2c750) 00:14:43.818 [2024-12-11 08:48:51.431476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.819 [2024-12-11 08:48:51.431484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d2c750) 00:14:43.819 [2024-12-11 08:48:51.431496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.819 [2024-12-11 08:48:51.431519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d2c750) 00:14:43.819 [2024-12-11 08:48:51.431530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.819 [2024-12-11 08:48:51.431549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90ec0, cid 5, qid 0 00:14:43.819 
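The four GET LOG PAGE commands above fetch the error, SMART/health, firmware-slot and commands-supported-and-effects logs. In cdw10, bits 7:0 carry the log page ID and, per NVMe 1.3 (the version this target reports), bits 27:16 carry NUMDL, the dword count minus one, so the transfer sizes can be checked by hand: 07ff0001, for instance, is 0x800 dwords (8192 bytes) of log page 0x01, which corresponds to 128 error log entries of 64 bytes each.

# Sketch: decode the GET LOG PAGE cdw10 values seen above
# (LID = bits 7:0, NUMDL = bits 27:16, dwords = NUMDL + 1).
for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
  lid=$(( cdw10 & 0xff ))
  numd=$(( ((cdw10 >> 16) & 0x0fff) + 1 ))
  printf 'cdw10=%s  lid=0x%02x  bytes=%d\n' "$cdw10" "$lid" $(( numd * 4 ))
done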
[2024-12-11 08:48:51.431557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90d40, cid 4, qid 0 00:14:43.819 [2024-12-11 08:48:51.431562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d91040, cid 6, qid 0 00:14:43.819 [2024-12-11 08:48:51.431567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d911c0, cid 7, qid 0 00:14:43.819 [2024-12-11 08:48:51.431707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.819 [2024-12-11 08:48:51.431715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.819 [2024-12-11 08:48:51.431719] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431723] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=8192, cccid=5 00:14:43.819 [2024-12-11 08:48:51.431728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90ec0) on tqpair(0x1d2c750): expected_datao=0, payload_size=8192 00:14:43.819 [2024-12-11 08:48:51.431733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431750] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431755] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.819 [2024-12-11 08:48:51.431767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.819 [2024-12-11 08:48:51.431771] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431775] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=512, cccid=4 00:14:43.819 [2024-12-11 08:48:51.431780] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d90d40) on tqpair(0x1d2c750): expected_datao=0, payload_size=512 00:14:43.819 [2024-12-11 08:48:51.431785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.819 [2024-12-11 08:48:51.431808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.819 [2024-12-11 08:48:51.431812] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431816] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=512, cccid=6 00:14:43.819 [2024-12-11 08:48:51.431821] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d91040) on tqpair(0x1d2c750): expected_datao=0, payload_size=512 00:14:43.819 [2024-12-11 08:48:51.431826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431836] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.819 [2024-12-11 08:48:51.431848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.819 [2024-12-11 08:48:51.431852] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431857] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2c750): datao=0, datal=4096, cccid=7 00:14:43.819 [2024-12-11 08:48:51.431861] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d911c0) on tqpair(0x1d2c750): expected_datao=0, payload_size=4096 00:14:43.819 [2024-12-11 08:48:51.431866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.819 [2024-12-11 08:48:51.431892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.819 [2024-12-11 08:48:51.431896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90ec0) on tqpair=0x1d2c750 00:14:43.819 [2024-12-11 08:48:51.431916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.819 [2024-12-11 08:48:51.431923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.819 [2024-12-11 08:48:51.431927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90d40) on tqpair=0x1d2c750 00:14:43.819 [2024-12-11 08:48:51.431944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.819 [2024-12-11 08:48:51.431951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.819 [2024-12-11 08:48:51.431955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d91040) on tqpair=0x1d2c750 00:14:43.819 [2024-12-11 08:48:51.431967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.819 [2024-12-11 08:48:51.431973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.819 [2024-12-11 08:48:51.431977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.819 [2024-12-11 08:48:51.431982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d911c0) on tqpair=0x1d2c750 00:14:43.819 ===================================================== 00:14:43.819 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.819 ===================================================== 00:14:43.819 Controller Capabilities/Features 00:14:43.819 ================================ 00:14:43.819 Vendor ID: 8086 00:14:43.819 Subsystem Vendor ID: 8086 00:14:43.819 Serial Number: SPDK00000000000001 00:14:43.819 Model Number: SPDK bdev Controller 00:14:43.819 Firmware Version: 25.01 00:14:43.819 Recommended Arb Burst: 6 00:14:43.819 IEEE OUI Identifier: e4 d2 5c 00:14:43.819 Multi-path I/O 00:14:43.819 May have multiple subsystem ports: Yes 00:14:43.819 May have multiple controllers: Yes 00:14:43.819 Associated with SR-IOV VF: No 00:14:43.819 Max Data Transfer Size: 131072 00:14:43.819 Max Number of Namespaces: 32 00:14:43.819 Max Number of I/O Queues: 127 00:14:43.819 NVMe Specification Version (VS): 1.3 00:14:43.819 NVMe Specification Version (Identify): 1.3 
00:14:43.819 Maximum Queue Entries: 128 00:14:43.819 Contiguous Queues Required: Yes 00:14:43.819 Arbitration Mechanisms Supported 00:14:43.819 Weighted Round Robin: Not Supported 00:14:43.819 Vendor Specific: Not Supported 00:14:43.819 Reset Timeout: 15000 ms 00:14:43.819 Doorbell Stride: 4 bytes 00:14:43.819 NVM Subsystem Reset: Not Supported 00:14:43.819 Command Sets Supported 00:14:43.819 NVM Command Set: Supported 00:14:43.819 Boot Partition: Not Supported 00:14:43.819 Memory Page Size Minimum: 4096 bytes 00:14:43.819 Memory Page Size Maximum: 4096 bytes 00:14:43.819 Persistent Memory Region: Not Supported 00:14:43.819 Optional Asynchronous Events Supported 00:14:43.819 Namespace Attribute Notices: Supported 00:14:43.819 Firmware Activation Notices: Not Supported 00:14:43.819 ANA Change Notices: Not Supported 00:14:43.819 PLE Aggregate Log Change Notices: Not Supported 00:14:43.819 LBA Status Info Alert Notices: Not Supported 00:14:43.819 EGE Aggregate Log Change Notices: Not Supported 00:14:43.819 Normal NVM Subsystem Shutdown event: Not Supported 00:14:43.819 Zone Descriptor Change Notices: Not Supported 00:14:43.819 Discovery Log Change Notices: Not Supported 00:14:43.819 Controller Attributes 00:14:43.819 128-bit Host Identifier: Supported 00:14:43.819 Non-Operational Permissive Mode: Not Supported 00:14:43.819 NVM Sets: Not Supported 00:14:43.819 Read Recovery Levels: Not Supported 00:14:43.819 Endurance Groups: Not Supported 00:14:43.819 Predictable Latency Mode: Not Supported 00:14:43.819 Traffic Based Keep ALive: Not Supported 00:14:43.819 Namespace Granularity: Not Supported 00:14:43.819 SQ Associations: Not Supported 00:14:43.819 UUID List: Not Supported 00:14:43.819 Multi-Domain Subsystem: Not Supported 00:14:43.819 Fixed Capacity Management: Not Supported 00:14:43.819 Variable Capacity Management: Not Supported 00:14:43.819 Delete Endurance Group: Not Supported 00:14:43.819 Delete NVM Set: Not Supported 00:14:43.819 Extended LBA Formats Supported: Not Supported 00:14:43.819 Flexible Data Placement Supported: Not Supported 00:14:43.819 00:14:43.819 Controller Memory Buffer Support 00:14:43.819 ================================ 00:14:43.819 Supported: No 00:14:43.819 00:14:43.819 Persistent Memory Region Support 00:14:43.819 ================================ 00:14:43.819 Supported: No 00:14:43.819 00:14:43.819 Admin Command Set Attributes 00:14:43.819 ============================ 00:14:43.819 Security Send/Receive: Not Supported 00:14:43.819 Format NVM: Not Supported 00:14:43.819 Firmware Activate/Download: Not Supported 00:14:43.819 Namespace Management: Not Supported 00:14:43.819 Device Self-Test: Not Supported 00:14:43.819 Directives: Not Supported 00:14:43.819 NVMe-MI: Not Supported 00:14:43.819 Virtualization Management: Not Supported 00:14:43.819 Doorbell Buffer Config: Not Supported 00:14:43.819 Get LBA Status Capability: Not Supported 00:14:43.819 Command & Feature Lockdown Capability: Not Supported 00:14:43.819 Abort Command Limit: 4 00:14:43.819 Async Event Request Limit: 4 00:14:43.819 Number of Firmware Slots: N/A 00:14:43.819 Firmware Slot 1 Read-Only: N/A 00:14:43.819 Firmware Activation Without Reset: N/A 00:14:43.819 Multiple Update Detection Support: N/A 00:14:43.819 Firmware Update Granularity: No Information Provided 00:14:43.819 Per-Namespace SMART Log: No 00:14:43.819 Asymmetric Namespace Access Log Page: Not Supported 00:14:43.820 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:43.820 Command Effects Log Page: Supported 00:14:43.820 Get Log Page Extended 
Data: Supported 00:14:43.820 Telemetry Log Pages: Not Supported 00:14:43.820 Persistent Event Log Pages: Not Supported 00:14:43.820 Supported Log Pages Log Page: May Support 00:14:43.820 Commands Supported & Effects Log Page: Not Supported 00:14:43.820 Feature Identifiers & Effects Log Page:May Support 00:14:43.820 NVMe-MI Commands & Effects Log Page: May Support 00:14:43.820 Data Area 4 for Telemetry Log: Not Supported 00:14:43.820 Error Log Page Entries Supported: 128 00:14:43.820 Keep Alive: Supported 00:14:43.820 Keep Alive Granularity: 10000 ms 00:14:43.820 00:14:43.820 NVM Command Set Attributes 00:14:43.820 ========================== 00:14:43.820 Submission Queue Entry Size 00:14:43.820 Max: 64 00:14:43.820 Min: 64 00:14:43.820 Completion Queue Entry Size 00:14:43.820 Max: 16 00:14:43.820 Min: 16 00:14:43.820 Number of Namespaces: 32 00:14:43.820 Compare Command: Supported 00:14:43.820 Write Uncorrectable Command: Not Supported 00:14:43.820 Dataset Management Command: Supported 00:14:43.820 Write Zeroes Command: Supported 00:14:43.820 Set Features Save Field: Not Supported 00:14:43.820 Reservations: Supported 00:14:43.820 Timestamp: Not Supported 00:14:43.820 Copy: Supported 00:14:43.820 Volatile Write Cache: Present 00:14:43.820 Atomic Write Unit (Normal): 1 00:14:43.820 Atomic Write Unit (PFail): 1 00:14:43.820 Atomic Compare & Write Unit: 1 00:14:43.820 Fused Compare & Write: Supported 00:14:43.820 Scatter-Gather List 00:14:43.820 SGL Command Set: Supported 00:14:43.820 SGL Keyed: Supported 00:14:43.820 SGL Bit Bucket Descriptor: Not Supported 00:14:43.820 SGL Metadata Pointer: Not Supported 00:14:43.820 Oversized SGL: Not Supported 00:14:43.820 SGL Metadata Address: Not Supported 00:14:43.820 SGL Offset: Supported 00:14:43.820 Transport SGL Data Block: Not Supported 00:14:43.820 Replay Protected Memory Block: Not Supported 00:14:43.820 00:14:43.820 Firmware Slot Information 00:14:43.820 ========================= 00:14:43.820 Active slot: 1 00:14:43.820 Slot 1 Firmware Revision: 25.01 00:14:43.820 00:14:43.820 00:14:43.820 Commands Supported and Effects 00:14:43.820 ============================== 00:14:43.820 Admin Commands 00:14:43.820 -------------- 00:14:43.820 Get Log Page (02h): Supported 00:14:43.820 Identify (06h): Supported 00:14:43.820 Abort (08h): Supported 00:14:43.820 Set Features (09h): Supported 00:14:43.820 Get Features (0Ah): Supported 00:14:43.820 Asynchronous Event Request (0Ch): Supported 00:14:43.820 Keep Alive (18h): Supported 00:14:43.820 I/O Commands 00:14:43.820 ------------ 00:14:43.820 Flush (00h): Supported LBA-Change 00:14:43.820 Write (01h): Supported LBA-Change 00:14:43.820 Read (02h): Supported 00:14:43.820 Compare (05h): Supported 00:14:43.820 Write Zeroes (08h): Supported LBA-Change 00:14:43.820 Dataset Management (09h): Supported LBA-Change 00:14:43.820 Copy (19h): Supported LBA-Change 00:14:43.820 00:14:43.820 Error Log 00:14:43.820 ========= 00:14:43.820 00:14:43.820 Arbitration 00:14:43.820 =========== 00:14:43.820 Arbitration Burst: 1 00:14:43.820 00:14:43.820 Power Management 00:14:43.820 ================ 00:14:43.820 Number of Power States: 1 00:14:43.820 Current Power State: Power State #0 00:14:43.820 Power State #0: 00:14:43.820 Max Power: 0.00 W 00:14:43.820 Non-Operational State: Operational 00:14:43.820 Entry Latency: Not Reported 00:14:43.820 Exit Latency: Not Reported 00:14:43.820 Relative Read Throughput: 0 00:14:43.820 Relative Read Latency: 0 00:14:43.820 Relative Write Throughput: 0 00:14:43.820 Relative Write Latency: 0 
00:14:43.820 Idle Power: Not Reported 00:14:43.820 Active Power: Not Reported 00:14:43.820 Non-Operational Permissive Mode: Not Supported 00:14:43.820 00:14:43.820 Health Information 00:14:43.820 ================== 00:14:43.820 Critical Warnings: 00:14:43.820 Available Spare Space: OK 00:14:43.820 Temperature: OK 00:14:43.820 Device Reliability: OK 00:14:43.820 Read Only: No 00:14:43.820 Volatile Memory Backup: OK 00:14:43.820 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:43.820 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:43.820 Available Spare: 0% 00:14:43.820 Available Spare Threshold: 0% 00:14:43.820 Life Percentage Used:[2024-12-11 08:48:51.432083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.432090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d2c750) 00:14:43.820 [2024-12-11 08:48:51.432098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.820 [2024-12-11 08:48:51.432120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d911c0, cid 7, qid 0 00:14:43.820 [2024-12-11 08:48:51.436197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.820 [2024-12-11 08:48:51.436217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.820 [2024-12-11 08:48:51.436239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d911c0) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436287] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:43.820 [2024-12-11 08:48:51.436300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90740) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.820 [2024-12-11 08:48:51.436313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d908c0) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.820 [2024-12-11 08:48:51.436324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90a40) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.820 [2024-12-11 08:48:51.436334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.820 [2024-12-11 08:48:51.436349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.820 [2024-12-11 08:48:51.436367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:43.820 [2024-12-11 08:48:51.436393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.820 [2024-12-11 08:48:51.436465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.820 [2024-12-11 08:48:51.436473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.820 [2024-12-11 08:48:51.436477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.820 [2024-12-11 08:48:51.436489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.820 [2024-12-11 08:48:51.436498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.820 [2024-12-11 08:48:51.436506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.820 [2024-12-11 08:48:51.436528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.820 [2024-12-11 08:48:51.436592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.820 [2024-12-11 08:48:51.436600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.820 [2024-12-11 08:48:51.436603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.436614] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:43.821 [2024-12-11 08:48:51.436619] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:43.821 [2024-12-11 08:48:51.436630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.436646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.436664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.436708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.436720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.436724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.436740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.436757] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.436774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.436818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.436826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.436830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.436845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.436862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.436879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.436924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.436931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.436935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.436950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.436959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.436966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.436983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437084] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 
08:48:51.437493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.821 [2024-12-11 08:48:51.437647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.821 [2024-12-11 08:48:51.437664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.821 [2024-12-11 08:48:51.437709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.821 [2024-12-11 08:48:51.437716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.821 [2024-12-11 08:48:51.437720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.821 [2024-12-11 08:48:51.437735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.821 [2024-12-11 08:48:51.437740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.437752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.437769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.437814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.437821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.437825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 
[2024-12-11 08:48:51.437829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.437840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.437856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.437874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.437918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.437925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.437929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.437944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.437954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.437961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.437978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438161] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438524] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.438927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.438934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.438938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.438953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.438962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.822 [2024-12-11 08:48:51.438970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.822 [2024-12-11 08:48:51.438986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.822 [2024-12-11 08:48:51.439031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.822 [2024-12-11 08:48:51.439063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.822 [2024-12-11 08:48:51.439068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.439073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.822 [2024-12-11 08:48:51.439085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.822 [2024-12-11 08:48:51.439090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 
08:48:51.439310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 
[2024-12-11 08:48:51.439689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.439895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.439902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.439906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.439922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.439931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.439939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.439956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.440000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.440012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.440016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.440021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.440032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.440037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.440041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.440049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.440067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.440111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.440123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.440127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.444173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.444219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.444226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.444231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2c750) 00:14:43.823 [2024-12-11 08:48:51.444239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.823 [2024-12-11 08:48:51.444264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d90bc0, cid 3, qid 0 00:14:43.823 [2024-12-11 08:48:51.444339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.823 [2024-12-11 08:48:51.444347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.823 [2024-12-11 08:48:51.444351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.823 [2024-12-11 08:48:51.444356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d90bc0) on tqpair=0x1d2c750 00:14:43.823 [2024-12-11 08:48:51.444365] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:14:43.823 0% 00:14:43.823 Data Units Read: 0 00:14:43.823 Data Units Written: 0 00:14:43.823 Host Read Commands: 0 00:14:43.823 Host Write Commands: 0 00:14:43.823 Controller Busy Time: 0 minutes 00:14:43.823 Power Cycles: 0 00:14:43.823 Power On Hours: 0 hours 00:14:43.823 Unsafe Shutdowns: 0 00:14:43.823 Unrecoverable Media Errors: 0 00:14:43.823 Lifetime Error Log Entries: 0 00:14:43.823 Warning Temperature Time: 0 minutes 00:14:43.823 Critical Temperature Time: 0 minutes 00:14:43.823 00:14:43.823 Number of Queues 00:14:43.823 ================ 00:14:43.823 Number of I/O Submission Queues: 127 00:14:43.823 Number of I/O Completion Queues: 127 00:14:43.823 00:14:43.823 Active Namespaces 00:14:43.823 ================= 00:14:43.823 Namespace ID:1 00:14:43.823 Error Recovery Timeout: Unlimited 00:14:43.823 Command Set Identifier: NVM (00h) 00:14:43.823 Deallocate: Supported 00:14:43.823 Deallocated/Unwritten Error: Not Supported 00:14:43.823 Deallocated Read Value: Unknown 00:14:43.823 Deallocate in Write Zeroes: Not Supported 00:14:43.824 Deallocated Guard Field: 0xFFFF 00:14:43.824 Flush: 
Supported 00:14:43.824 Reservation: Supported 00:14:43.824 Namespace Sharing Capabilities: Multiple Controllers 00:14:43.824 Size (in LBAs): 131072 (0GiB) 00:14:43.824 Capacity (in LBAs): 131072 (0GiB) 00:14:43.824 Utilization (in LBAs): 131072 (0GiB) 00:14:43.824 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:43.824 EUI64: ABCDEF0123456789 00:14:43.824 UUID: fd609558-3d23-4e70-a1a6-59251796e1a1 00:14:43.824 Thin Provisioning: Not Supported 00:14:43.824 Per-NS Atomic Units: Yes 00:14:43.824 Atomic Boundary Size (Normal): 0 00:14:43.824 Atomic Boundary Size (PFail): 0 00:14:43.824 Atomic Boundary Offset: 0 00:14:43.824 Maximum Single Source Range Length: 65535 00:14:43.824 Maximum Copy Length: 65535 00:14:43.824 Maximum Source Range Count: 1 00:14:43.824 NGUID/EUI64 Never Reused: No 00:14:43.824 Namespace Write Protected: No 00:14:43.824 Number of LBA Formats: 1 00:14:43.824 Current LBA Format: LBA Format #00 00:14:43.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:43.824 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:43.824 rmmod nvme_tcp 00:14:43.824 rmmod nvme_fabrics 00:14:43.824 rmmod nvme_keyring 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74860 ']' 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74860 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74860 ']' 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74860 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.824 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74860 00:14:44.117 killing process with pid 74860 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74860' 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74860 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74860 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.117 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:44.118 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:44.377 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:44.377 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:44.377 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.377 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.377 08:48:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:44.377 
************************************ 00:14:44.377 END TEST nvmf_identify 00:14:44.377 ************************************ 00:14:44.377 00:14:44.377 real 0m2.185s 00:14:44.377 user 0m4.337s 00:14:44.377 sys 0m0.702s 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:44.377 ************************************ 00:14:44.377 START TEST nvmf_perf 00:14:44.377 ************************************ 00:14:44.377 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:44.636 * Looking for test storage... 00:14:44.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:44.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.637 --rc genhtml_branch_coverage=1 00:14:44.637 --rc genhtml_function_coverage=1 00:14:44.637 --rc genhtml_legend=1 00:14:44.637 --rc geninfo_all_blocks=1 00:14:44.637 --rc geninfo_unexecuted_blocks=1 00:14:44.637 00:14:44.637 ' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:44.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.637 --rc genhtml_branch_coverage=1 00:14:44.637 --rc genhtml_function_coverage=1 00:14:44.637 --rc genhtml_legend=1 00:14:44.637 --rc geninfo_all_blocks=1 00:14:44.637 --rc geninfo_unexecuted_blocks=1 00:14:44.637 00:14:44.637 ' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:44.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.637 --rc genhtml_branch_coverage=1 00:14:44.637 --rc genhtml_function_coverage=1 00:14:44.637 --rc genhtml_legend=1 00:14:44.637 --rc geninfo_all_blocks=1 00:14:44.637 --rc geninfo_unexecuted_blocks=1 00:14:44.637 00:14:44.637 ' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:44.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.637 --rc genhtml_branch_coverage=1 00:14:44.637 --rc genhtml_function_coverage=1 00:14:44.637 --rc genhtml_legend=1 00:14:44.637 --rc geninfo_all_blocks=1 00:14:44.637 --rc geninfo_unexecuted_blocks=1 00:14:44.637 00:14:44.637 ' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:44.637 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:44.638 Cannot find device "nvmf_init_br" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:44.638 Cannot find device "nvmf_init_br2" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:44.638 Cannot find device "nvmf_tgt_br" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.638 Cannot find device "nvmf_tgt_br2" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:44.638 Cannot find device "nvmf_init_br" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:44.638 Cannot find device "nvmf_init_br2" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:44.638 Cannot find device "nvmf_tgt_br" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:44.638 Cannot find device "nvmf_tgt_br2" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:44.638 Cannot find device "nvmf_br" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:44.638 Cannot find device "nvmf_init_if" 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:44.638 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:44.897 Cannot find device "nvmf_init_if2" 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:44.897 08:48:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.897 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:14:45.157 00:14:45.157 --- 10.0.0.3 ping statistics --- 00:14:45.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.157 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:45.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:45.157 00:14:45.157 --- 10.0.0.4 ping statistics --- 00:14:45.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.157 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:45.157 00:14:45.157 --- 10.0.0.1 ping statistics --- 00:14:45.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.157 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:45.157 00:14:45.157 --- 10.0.0.2 ping statistics --- 00:14:45.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.157 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=75108 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 75108 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 75108 ']' 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.157 08:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:45.157 [2024-12-11 08:48:52.788591] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:45.157 [2024-12-11 08:48:52.788689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.416 [2024-12-11 08:48:52.933034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.416 [2024-12-11 08:48:52.964520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.416 [2024-12-11 08:48:52.964820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.416 [2024-12-11 08:48:52.964967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.416 [2024-12-11 08:48:52.965180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.416 [2024-12-11 08:48:52.965227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.416 [2024-12-11 08:48:52.966121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.416 [2024-12-11 08:48:52.966217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.416 [2024-12-11 08:48:52.966748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.416 [2024-12-11 08:48:52.966756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.416 [2024-12-11 08:48:52.995922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:46.353 08:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:46.610 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:46.610 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:46.868 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:46.868 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:47.127 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:47.127 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:47.127 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:47.127 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:47.127 08:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:47.386 [2024-12-11 08:48:55.121682] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.386 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:47.645 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:47.645 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:47.910 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:47.910 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:48.478 08:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:48.478 [2024-12-11 08:48:56.171001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.478 08:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:48.737 08:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:48.737 08:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:48.737 08:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:48.737 08:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:50.115 Initializing NVMe Controllers 00:14:50.115 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.115 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:50.115 Initialization complete. Launching workers. 00:14:50.115 ======================================================== 00:14:50.115 Latency(us) 00:14:50.115 Device Information : IOPS MiB/s Average min max 00:14:50.115 PCIE (0000:00:10.0) NSID 1 from core 0: 23325.83 91.12 1371.79 317.32 8868.61 00:14:50.115 ======================================================== 00:14:50.115 Total : 23325.83 91.12 1371.79 317.32 8868.61 00:14:50.115 00:14:50.115 08:48:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:51.493 Initializing NVMe Controllers 00:14:51.493 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.493 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.493 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:51.493 Initialization complete. Launching workers. 
00:14:51.493 ======================================================== 00:14:51.493 Latency(us) 00:14:51.493 Device Information : IOPS MiB/s Average min max 00:14:51.493 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3710.43 14.49 269.18 99.04 4268.41 00:14:51.493 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.75 0.49 8079.65 4952.64 12012.13 00:14:51.493 ======================================================== 00:14:51.493 Total : 3835.18 14.98 523.23 99.04 12012.13 00:14:51.493 00:14:51.493 08:48:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:52.873 Initializing NVMe Controllers 00:14:52.873 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.873 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.873 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:52.873 Initialization complete. Launching workers. 00:14:52.873 ======================================================== 00:14:52.873 Latency(us) 00:14:52.873 Device Information : IOPS MiB/s Average min max 00:14:52.873 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8668.41 33.86 3691.79 761.92 7573.68 00:14:52.873 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3996.04 15.61 8009.54 5823.31 11943.84 00:14:52.873 ======================================================== 00:14:52.873 Total : 12664.45 49.47 5054.18 761.92 11943.84 00:14:52.873 00:14:52.873 08:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:52.873 08:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:55.409 Initializing NVMe Controllers 00:14:55.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.409 Controller IO queue size 128, less than required. 00:14:55.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.409 Controller IO queue size 128, less than required. 00:14:55.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:55.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:55.409 Initialization complete. Launching workers. 
00:14:55.409 ======================================================== 00:14:55.409 Latency(us) 00:14:55.409 Device Information : IOPS MiB/s Average min max 00:14:55.409 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1736.78 434.20 74754.71 41867.99 162714.70 00:14:55.409 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 661.46 165.37 200838.23 53577.84 348471.05 00:14:55.409 ======================================================== 00:14:55.409 Total : 2398.24 599.56 109530.04 41867.99 348471.05 00:14:55.409 00:14:55.409 08:49:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:55.409 Initializing NVMe Controllers 00:14:55.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.409 Controller IO queue size 128, less than required. 00:14:55.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.409 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:55.409 Controller IO queue size 128, less than required. 00:14:55.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.409 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:55.409 WARNING: Some requested NVMe devices were skipped 00:14:55.409 No valid NVMe controllers or AIO or URING devices found 00:14:55.727 08:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:58.259 Initializing NVMe Controllers 00:14:58.259 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:58.259 Controller IO queue size 128, less than required. 00:14:58.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:58.259 Controller IO queue size 128, less than required. 00:14:58.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:58.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:58.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:58.259 Initialization complete. Launching workers. 
00:14:58.259 00:14:58.259 ==================== 00:14:58.259 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:58.259 TCP transport: 00:14:58.259 polls: 9923 00:14:58.259 idle_polls: 5861 00:14:58.259 sock_completions: 4062 00:14:58.259 nvme_completions: 6469 00:14:58.259 submitted_requests: 9572 00:14:58.259 queued_requests: 1 00:14:58.259 00:14:58.259 ==================== 00:14:58.259 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:58.259 TCP transport: 00:14:58.259 polls: 12660 00:14:58.259 idle_polls: 8812 00:14:58.259 sock_completions: 3848 00:14:58.259 nvme_completions: 6485 00:14:58.259 submitted_requests: 9692 00:14:58.259 queued_requests: 1 00:14:58.259 ======================================================== 00:14:58.259 Latency(us) 00:14:58.259 Device Information : IOPS MiB/s Average min max 00:14:58.259 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1614.33 403.58 80525.31 47449.23 135764.67 00:14:58.259 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1618.32 404.58 79812.96 26867.56 156217.84 00:14:58.259 ======================================================== 00:14:58.259 Total : 3232.65 808.16 80168.70 26867.56 156217.84 00:14:58.259 00:14:58.259 08:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:58.259 08:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.517 rmmod nvme_tcp 00:14:58.517 rmmod nvme_fabrics 00:14:58.517 rmmod nvme_keyring 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 75108 ']' 00:14:58.517 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 75108 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 75108 ']' 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 75108 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75108 00:14:58.518 killing process with pid 75108 00:14:58.518 08:49:06 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75108' 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 75108 00:14:58.518 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 75108 00:14:58.776 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.776 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.776 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.776 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:59.035 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:59.035 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.035 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:59.036 ************************************ 00:14:59.036 END TEST nvmf_perf 00:14:59.036 ************************************ 
00:14:59.036 00:14:59.036 real 0m14.688s 00:14:59.036 user 0m53.703s 00:14:59.036 sys 0m3.926s 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.036 08:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.296 ************************************ 00:14:59.296 START TEST nvmf_fio_host 00:14:59.296 ************************************ 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:59.296 * Looking for test storage... 00:14:59.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.296 08:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:59.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.296 --rc genhtml_branch_coverage=1 00:14:59.296 --rc genhtml_function_coverage=1 00:14:59.296 --rc genhtml_legend=1 00:14:59.296 --rc geninfo_all_blocks=1 00:14:59.296 --rc geninfo_unexecuted_blocks=1 00:14:59.296 00:14:59.296 ' 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:59.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.296 --rc genhtml_branch_coverage=1 00:14:59.296 --rc genhtml_function_coverage=1 00:14:59.296 --rc genhtml_legend=1 00:14:59.296 --rc geninfo_all_blocks=1 00:14:59.296 --rc geninfo_unexecuted_blocks=1 00:14:59.296 00:14:59.296 ' 00:14:59.296 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:59.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.296 --rc genhtml_branch_coverage=1 00:14:59.297 --rc genhtml_function_coverage=1 00:14:59.297 --rc genhtml_legend=1 00:14:59.297 --rc geninfo_all_blocks=1 00:14:59.297 --rc geninfo_unexecuted_blocks=1 00:14:59.297 00:14:59.297 ' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:59.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.297 --rc genhtml_branch_coverage=1 00:14:59.297 --rc genhtml_function_coverage=1 00:14:59.297 --rc genhtml_legend=1 00:14:59.297 --rc geninfo_all_blocks=1 00:14:59.297 --rc geninfo_unexecuted_blocks=1 00:14:59.297 00:14:59.297 ' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.297 08:49:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:59.297 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
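The nvmftestinit call traced below builds a purely virtual NVMe/TCP test topology: two veth pairs for the initiator side, two for the target side, the target ends moved into a dedicated network namespace, and all of the host-side peers joined by a bridge. A minimal sketch of equivalent commands, using the same interface, namespace, and address names the harness traces (an illustration of the pattern, not the script's exact code):

  # namespace and veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and address both sides
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, bridge the host-side peers, open TCP port 4420
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # host (initiator) side can reach the target namespace

The pings against 10.0.0.1 through 10.0.0.4 in the trace below are exactly this reachability check, run in both directions.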
00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:59.297 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:59.298 Cannot find device "nvmf_init_br" 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:59.298 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:59.556 Cannot find device "nvmf_init_br2" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:59.556 Cannot find device "nvmf_tgt_br" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:59.556 Cannot find device "nvmf_tgt_br2" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:59.556 Cannot find device "nvmf_init_br" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:59.556 Cannot find device "nvmf_init_br2" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:59.556 Cannot find device "nvmf_tgt_br" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:59.556 Cannot find device "nvmf_tgt_br2" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:59.556 Cannot find device "nvmf_br" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:59.556 Cannot find device "nvmf_init_if" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:59.556 Cannot find device "nvmf_init_if2" 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:59.556 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:59.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:59.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:14:59.815 00:14:59.815 --- 10.0.0.3 ping statistics --- 00:14:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.815 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:59.815 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:59.815 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:59.815 00:14:59.815 --- 10.0.0.4 ping statistics --- 00:14:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.815 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:59.815 00:14:59.815 --- 10.0.0.1 ping statistics --- 00:14:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.815 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:59.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:59.815 00:14:59.815 --- 10.0.0.2 ping statistics --- 00:14:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.815 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75567 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75567 00:14:59.815 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75567 ']' 00:14:59.816 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.816 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.816 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.816 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.816 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.816 [2024-12-11 08:49:07.480360] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:14:59.816 [2024-12-11 08:49:07.480445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.074 [2024-12-11 08:49:07.630570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.074 [2024-12-11 08:49:07.670155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.074 [2024-12-11 08:49:07.670219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.074 [2024-12-11 08:49:07.670233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.074 [2024-12-11 08:49:07.670243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.074 [2024-12-11 08:49:07.670252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
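With EAL initialized, the trace below shows the target's reactors starting, the default socket implementation being overridden to uring, and the subsystem being provisioned over the RPC socket before fio is pointed at it. A condensed sketch of that flow, with paths shortened to the spdk repository root (the harness's waitforlisten helper simply polls /var/tmp/spdk.sock until the application answers):

  # start the target inside the test namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # create the TCP transport, back a subsystem with a 64 MB malloc bdev
  # (512-byte blocks), and listen on the namespace-side address
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # run fio through the SPDK nvme plugin; the target is addressed by transport
  # parameters in --filename instead of a kernel block device
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096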
00:15:00.074 [2024-12-11 08:49:07.671091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.074 [2024-12-11 08:49:07.671221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.074 [2024-12-11 08:49:07.671303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.074 [2024-12-11 08:49:07.671310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.074 [2024-12-11 08:49:07.705260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.074 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.074 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:00.074 08:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.332 [2024-12-11 08:49:07.981823] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.332 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:00.332 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.332 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:00.332 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:00.591 Malloc1 00:15:00.591 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.157 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.157 08:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:01.414 [2024-12-11 08:49:09.123533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:01.414 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.673 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.931 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:01.931 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:01.931 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:01.931 08:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:01.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:01.931 fio-3.35 00:15:01.931 Starting 1 thread 00:15:04.459 00:15:04.459 test: (groupid=0, jobs=1): err= 0: pid=75638: Wed Dec 11 08:49:11 2024 00:15:04.459 read: IOPS=8802, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec) 00:15:04.459 slat (nsec): min=1920, max=317727, avg=2658.68, stdev=3296.46 00:15:04.459 clat (usec): min=2502, max=14584, avg=7565.83, stdev=568.03 00:15:04.459 lat (usec): min=2558, max=14586, avg=7568.49, stdev=567.77 00:15:04.459 clat percentiles (usec): 00:15:04.459 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:15:04.459 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:15:04.459 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356], 00:15:04.459 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[12649], 99.95th=[13566], 00:15:04.459 | 99.99th=[14615] 00:15:04.459 bw ( KiB/s): min=34552, max=35928, per=99.99%, avg=35206.00, stdev=589.12, samples=4 00:15:04.459 iops : min= 8638, max= 8982, avg=8801.50, stdev=147.28, samples=4 00:15:04.459 write: IOPS=8813, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec); 0 zone resets 00:15:04.459 slat (usec): min=2, max=242, avg= 2.79, stdev= 2.45 00:15:04.459 clat (usec): min=2365, max=13645, avg=6909.14, stdev=517.46 00:15:04.459 lat (usec): min=2379, max=13647, avg=6911.93, stdev=517.33 00:15:04.459 clat 
percentiles (usec): 00:15:04.459 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:15:04.459 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:15:04.459 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:15:04.459 | 99.00th=[ 8094], 99.50th=[ 8979], 99.90th=[12387], 99.95th=[13042], 00:15:04.459 | 99.99th=[13566] 00:15:04.459 bw ( KiB/s): min=34960, max=35568, per=99.99%, avg=35250.00, stdev=261.80, samples=4 00:15:04.459 iops : min= 8740, max= 8892, avg=8812.50, stdev=65.45, samples=4 00:15:04.459 lat (msec) : 4=0.09%, 10=99.61%, 20=0.30% 00:15:04.459 cpu : usr=70.49%, sys=22.33%, ctx=10, majf=0, minf=6 00:15:04.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:04.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.459 issued rwts: total=17666,17689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.459 00:15:04.459 Run status group 0 (all jobs): 00:15:04.459 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:15:04.459 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.459 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:04.460 08:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:04.460 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:04.460 fio-3.35 00:15:04.460 Starting 1 thread 00:15:06.990 00:15:06.990 test: (groupid=0, jobs=1): err= 0: pid=75686: Wed Dec 11 08:49:14 2024 00:15:06.990 read: IOPS=8254, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:15:06.990 slat (usec): min=3, max=126, avg= 3.87, stdev= 2.24 00:15:06.990 clat (usec): min=2984, max=17929, avg=8723.20, stdev=2710.45 00:15:06.990 lat (usec): min=2988, max=17932, avg=8727.07, stdev=2710.52 00:15:06.990 clat percentiles (usec): 00:15:06.990 | 1.00th=[ 4146], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6259], 00:15:06.990 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:06.990 | 70.00th=[10028], 80.00th=[10814], 90.00th=[12387], 95.00th=[13960], 00:15:06.990 | 99.00th=[15795], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:15:06.990 | 99.99th=[17957] 00:15:06.990 bw ( KiB/s): min=57344, max=75040, per=50.80%, avg=67088.00, stdev=7958.61, samples=4 00:15:06.990 iops : min= 3584, max= 4690, avg=4193.00, stdev=497.41, samples=4 00:15:06.990 write: IOPS=4810, BW=75.2MiB/s (78.8MB/s)(137MiB/1828msec); 0 zone resets 00:15:06.990 slat (usec): min=33, max=358, avg=39.83, stdev= 8.96 00:15:06.990 clat (usec): min=4303, max=21968, avg=12135.71, stdev=2195.07 00:15:06.990 lat (usec): min=4338, max=22003, avg=12175.54, stdev=2195.33 00:15:06.990 clat percentiles (usec): 00:15:06.990 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:15:06.990 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:15:06.990 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15008], 95.00th=[16057], 00:15:06.990 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20055], 99.95th=[20317], 00:15:06.990 | 99.99th=[21890] 00:15:06.990 bw ( KiB/s): min=59648, max=76960, per=90.58%, avg=69720.00, stdev=7851.91, samples=4 00:15:06.990 iops : min= 3728, max= 4810, avg=4357.50, stdev=490.74, samples=4 00:15:06.990 lat (msec) : 4=0.41%, 10=50.07%, 20=49.46%, 50=0.06% 00:15:06.990 cpu : usr=83.65%, sys=12.56%, ctx=3, majf=0, minf=17 00:15:06.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:06.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.990 issued rwts: total=16566,8794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.990 00:15:06.990 Run status group 0 (all jobs): 00:15:06.990 
READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2007-2007msec 00:15:06.990 WRITE: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=137MiB (144MB), run=1828-1828msec 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.990 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:06.990 rmmod nvme_tcp 00:15:06.990 rmmod nvme_fabrics 00:15:07.249 rmmod nvme_keyring 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75567 ']' 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75567 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75567 ']' 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75567 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75567 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.249 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.249 killing process with pid 75567 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75567' 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75567 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75567 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:07.250 08:49:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:07.250 08:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:07.250 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:07.507 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:07.508 00:15:07.508 real 0m8.426s 00:15:07.508 user 0m33.712s 00:15:07.508 sys 0m2.248s 00:15:07.508 ************************************ 00:15:07.508 END TEST nvmf_fio_host 00:15:07.508 ************************************ 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.508 08:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:07.788 ************************************ 00:15:07.788 START TEST nvmf_failover 
00:15:07.788 ************************************ 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:07.788 * Looking for test storage... 00:15:07.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.788 --rc genhtml_branch_coverage=1 00:15:07.788 --rc genhtml_function_coverage=1 00:15:07.788 --rc genhtml_legend=1 00:15:07.788 --rc geninfo_all_blocks=1 00:15:07.788 --rc geninfo_unexecuted_blocks=1 00:15:07.788 00:15:07.788 ' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.788 --rc genhtml_branch_coverage=1 00:15:07.788 --rc genhtml_function_coverage=1 00:15:07.788 --rc genhtml_legend=1 00:15:07.788 --rc geninfo_all_blocks=1 00:15:07.788 --rc geninfo_unexecuted_blocks=1 00:15:07.788 00:15:07.788 ' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.788 --rc genhtml_branch_coverage=1 00:15:07.788 --rc genhtml_function_coverage=1 00:15:07.788 --rc genhtml_legend=1 00:15:07.788 --rc geninfo_all_blocks=1 00:15:07.788 --rc geninfo_unexecuted_blocks=1 00:15:07.788 00:15:07.788 ' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.788 --rc genhtml_branch_coverage=1 00:15:07.788 --rc genhtml_function_coverage=1 00:15:07.788 --rc genhtml_legend=1 00:15:07.788 --rc geninfo_all_blocks=1 00:15:07.788 --rc geninfo_unexecuted_blocks=1 00:15:07.788 00:15:07.788 ' 00:15:07.788 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.789 
08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.789 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
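The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33, where an empty string reaches an integer test ('[' '' -eq 1 ']'); the script keeps going because the comparison simply fails, but the noise is avoidable. A minimal sketch of a defensive pattern, using a hypothetical variable name since the trace does not show which variable was empty:

  # Hypothetical guard for an integer test fed by a possibly-empty variable (not the
  # repository's actual fix). "${SOME_FLAG:-0}" substitutes 0 when the variable is
  # unset or empty, so '[' always sees a valid integer operand and prints no error.
  SOME_FLAG=""                          # simulate the empty value seen in the log
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  else
      echo "flag disabled"
  fi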
00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:07.789 Cannot find device "nvmf_init_br" 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:07.789 Cannot find device "nvmf_init_br2" 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:07.789 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:08.069 Cannot find device "nvmf_tgt_br" 00:15:08.069 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.070 Cannot find device "nvmf_tgt_br2" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:08.070 Cannot find device "nvmf_init_br" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:08.070 Cannot find device "nvmf_init_br2" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:08.070 Cannot find device "nvmf_tgt_br" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:08.070 Cannot find device "nvmf_tgt_br2" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:08.070 Cannot find device "nvmf_br" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:08.070 Cannot find device "nvmf_init_if" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:08.070 Cannot find device "nvmf_init_if2" 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.070 
08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.070 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:08.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:08.329 00:15:08.329 --- 10.0.0.3 ping statistics --- 00:15:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.329 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:08.329 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:08.329 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:08.329 00:15:08.329 --- 10.0.0.4 ping statistics --- 00:15:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.329 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:08.329 00:15:08.329 --- 10.0.0.1 ping statistics --- 00:15:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.329 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:08.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:08.329 00:15:08.329 --- 10.0.0.2 ping statistics --- 00:15:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.329 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75963 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75963 00:15:08.329 08:49:15 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75963 ']' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.329 08:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:08.329 [2024-12-11 08:49:15.962999] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:15:08.329 [2024-12-11 08:49:15.963129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.588 [2024-12-11 08:49:16.116779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.588 [2024-12-11 08:49:16.156431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.588 [2024-12-11 08:49:16.156486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.588 [2024-12-11 08:49:16.156500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.588 [2024-12-11 08:49:16.156516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.588 [2024-12-11 08:49:16.156525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
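The nvmf_veth_init trace above builds a small two-namespace topology: veth pairs for the initiator and target sides, the target ends moved into nvmf_tgt_ns_spdk, everything joined over the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the same steps, reusing the interface names and addresses shown in the log and omitting the second initiator/target pair and the "|| true"-style cleanup:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target    <-> bridge
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                             # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host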
00:15:08.588 [2024-12-11 08:49:16.157417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.588 [2024-12-11 08:49:16.157506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.588 [2024-12-11 08:49:16.157514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.588 [2024-12-11 08:49:16.191150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.588 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:08.846 [2024-12-11 08:49:16.598897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.846 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:09.413 Malloc0 00:15:09.413 08:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.413 08:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.671 08:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:09.930 [2024-12-11 08:49:17.636348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.930 08:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:10.188 [2024-12-11 08:49:17.876546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:10.188 08:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:10.445 [2024-12-11 08:49:18.120768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76014 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
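From here host/failover.sh configures the target entirely over rpc.py: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that namespace, and listeners on 10.0.0.3 ports 4420-4422, before launching bdevperf with its own RPC socket. A sketch of that sequence as it could be replayed by hand, with the repository root factored into a $SPDK variable for brevity:

  SPDK=/home/vagrant/spdk_repo/spdk                              # repo path as logged above
  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done
  # bdevperf runs as a separate process with its own RPC socket so the test can attach
  # controllers and start I/O on demand:
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &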
00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76014 /var/tmp/bdevperf.sock 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76014 ']' 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.445 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:10.704 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.704 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:10.704 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:11.278 NVMe0n1 00:15:11.278 08:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:11.540 00:15:11.540 08:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76030 00:15:11.540 08:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.540 08:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:12.474 08:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:12.733 [2024-12-11 08:49:20.404997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405131] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 
00:15:12.733 [2024-12-11 08:49:20.405357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.733 [2024-12-11 08:49:20.405489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is 
same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405914] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.405993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.406000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.406008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.406016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 [2024-12-11 08:49:20.406041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf97cf0 is same with the state(6) to be set 00:15:12.734 08:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:16.018 08:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:16.018 00:15:16.018 08:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:16.277 08:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:19.560 08:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:19.560 [2024-12-11 08:49:27.292969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:19.560 08:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:20.939 08:49:28 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:20.939 08:49:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 76030 00:15:27.513 { 00:15:27.513 "results": [ 00:15:27.513 { 00:15:27.513 "job": "NVMe0n1", 00:15:27.513 "core_mask": "0x1", 00:15:27.513 "workload": "verify", 00:15:27.513 "status": "finished", 00:15:27.513 "verify_range": { 00:15:27.513 "start": 0, 00:15:27.513 "length": 16384 00:15:27.513 }, 00:15:27.513 "queue_depth": 128, 00:15:27.513 "io_size": 4096, 00:15:27.513 "runtime": 15.008735, 00:15:27.513 "iops": 8993.429492891972, 00:15:27.513 "mibps": 35.130583956609264, 00:15:27.513 "io_failed": 3357, 00:15:27.513 "io_timeout": 0, 00:15:27.513 "avg_latency_us": 13854.548840637519, 00:15:27.513 "min_latency_us": 662.8072727272727, 00:15:27.513 "max_latency_us": 17873.454545454544 00:15:27.513 } 00:15:27.513 ], 00:15:27.513 "core_count": 1 00:15:27.513 } 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 76014 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76014 ']' 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76014 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76014 00:15:27.513 killing process with pid 76014 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76014' 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76014 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76014 00:15:27.513 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:27.513 [2024-12-11 08:49:18.194015] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:15:27.513 [2024-12-11 08:49:18.194117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76014 ] 00:15:27.513 [2024-12-11 08:49:18.347198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.513 [2024-12-11 08:49:18.386720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.513 [2024-12-11 08:49:18.420336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.513 Running I/O for 15 seconds... 
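The failover exercise itself, as traced above: NVMe0 is attached through the bdevperf RPC socket on ports 4420 and 4421 with -x failover, perform_tests starts the 15-second verify workload, and the listeners are then cycled (4420 dropped, 4422 added, 4421 dropped, 4420 restored, 4422 dropped) so the host has to switch paths while I/O is in flight; the burst of "recv state of tqpair ... state(6)" messages coincides with the target tearing down the qpairs on the removed listener. A condensed sketch of the host-side steps, under the same $SPDK assumption as the previous snippet:

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK/scripts/rpc.py"
  brpc="$rpc -s /var/tmp/bdevperf.sock"                          # bdevperf's own RPC socket
  # one controller, two portals; -x failover lets the bdev layer switch paths
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # while I/O runs, cycle the target listeners to force path failover
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  wait                                                           # perform_tests finishes the 15 s run

As a rough sanity check on the summary JSON that follows: with 128 requests outstanding and about 8993 completed plus ~224 failed requests per second, Little's law gives roughly 128 / 9217 s ≈ 13.9 ms per request, consistent with the reported avg_latency_us of about 13 855.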
00:15:27.513 6933.00 IOPS, 27.08 MiB/s [2024-12-11T08:49:35.287Z] [2024-12-11 08:49:20.406108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.513 [2024-12-11 08:49:20.406668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.513 [2024-12-11 08:49:20.406683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 
08:49:20.406796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.406993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.407975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.407990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.408004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.408019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.408050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.408065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.514 [2024-12-11 08:49:20.408080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.514 [2024-12-11 08:49:20.408096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:27.515 [2024-12-11 08:49:20.408110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.408972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.408987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.515 [2024-12-11 08:49:20.409731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.515 [2024-12-11 08:49:20.409747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.409976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.409992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410051] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.515 [2024-12-11 08:49:20.410208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.515 [2024-12-11 08:49:20.410225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:20.410239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd29ac0 is same with the state(6) to be set 00:15:27.516 [2024-12-11 08:49:20.410272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:27.516 [2024-12-11 08:49:20.410282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:27.516 [2024-12-11 08:49:20.410293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:15:27.516 [2024-12-11 08:49:20.410309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410361] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:27.516 [2024-12-11 08:49:20.410418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.516 [2024-12-11 08:49:20.410441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.516 [2024-12-11 08:49:20.410471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.516 [2024-12-11 08:49:20.410499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.516 [2024-12-11 08:49:20.410542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:20.410556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:27.516 [2024-12-11 08:49:20.414572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:27.516 [2024-12-11 08:49:20.414611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbac60 (9): Bad file descriptor 00:15:27.516 [2024-12-11 08:49:20.442923] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:15:27.516 7705.50 IOPS, 30.10 MiB/s [2024-12-11T08:49:35.290Z] 8251.67 IOPS, 32.23 MiB/s [2024-12-11T08:49:35.290Z] 8514.75 IOPS, 33.26 MiB/s [2024-12-11T08:49:35.290Z] [2024-12-11 08:49:24.013975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 
08:49:24.014293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.516 [2024-12-11 08:49:24.014872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.014970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.014986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.015000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.015015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.015029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.015056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.516 [2024-12-11 08:49:24.015073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.516 [2024-12-11 08:49:24.015089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.015103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.015144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 
[2024-12-11 08:49:24.015605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.015890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.015920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.015949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.015979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.015994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.517 [2024-12-11 08:49:24.016403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.517 [2024-12-11 08:49:24.016650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.517 [2024-12-11 08:49:24.016664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:27.518 [2024-12-11 08:49:24.016868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.016977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.016991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:24.017871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.017980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.017994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.018009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.018039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.018055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.018069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:24.018085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:24.018099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2d950 is same with the state(6) to be set 
00:15:27.518 [2024-12-11 08:49:24.018167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:15:27.518 [2024-12-11 08:49:24.018179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:15:27.518 [2024-12-11 08:49:24.018190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 
00:15:27.518 [2024-12-11 08:49:24.018204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018257] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 
00:15:27.518 [2024-12-11 08:49:24.018316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:27.518 [2024-12-11 08:49:24.018339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:27.518 [2024-12-11 08:49:24.018372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:27.518 [2024-12-11 08:49:24.018399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:27.518 [2024-12-11 08:49:24.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.518 [2024-12-11 08:49:24.018458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:15:27.518 [2024-12-11 08:49:24.018492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbac60 (9): Bad file descriptor 
00:15:27.518 [2024-12-11 08:49:24.022520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 
00:15:27.518 [2024-12-11 08:49:24.048460] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:15:27.518 8572.20 IOPS, 33.49 MiB/s [2024-12-11T08:49:35.292Z] 8687.50 IOPS, 33.94 MiB/s [2024-12-11T08:49:35.292Z] 8769.86 IOPS, 34.26 MiB/s [2024-12-11T08:49:35.292Z] 8832.62 IOPS, 34.50 MiB/s [2024-12-11T08:49:35.292Z] 8864.56 IOPS, 34.63 MiB/s [2024-12-11T08:49:35.292Z] [2024-12-11 08:49:28.585418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.518 [2024-12-11 08:49:28.585731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:28.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26144 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:28.585787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.518 [2024-12-11 08:49:28.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.518 [2024-12-11 08:49:28.585831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.585844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.585858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.585872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.585887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.585900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.585915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.585928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.585943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.585956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.585971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.585992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:27.519 [2024-12-11 08:49:28.586081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.586910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.586982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.586995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:27.519 [2024-12-11 08:49:28.587333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587658] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.587939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.587970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.587986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.519 [2024-12-11 08:49:28.588205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.588236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.588266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.588296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.588326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.519 [2024-12-11 08:49:28.588350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.519 [2024-12-11 08:49:28.588366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27064 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.588717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 
[2024-12-11 08:49:28.588928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.588973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.520 [2024-12-11 08:49:28.589239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.520 [2024-12-11 08:49:28.589449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b710 is same with the state(6) to be set 00:15:27.520 [2024-12-11 08:49:28.589481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:27.520 [2024-12-11 08:49:28.589492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:27.520 [2024-12-11 08:49:28.589503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26640 len:8 PRP1 0x0 PRP2 0x0 00:15:27.520 [2024-12-11 08:49:28.589517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589582] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:27.520 [2024-12-11 08:49:28.589639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.520 [2024-12-11 08:49:28.589671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589688] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.520 [2024-12-11 08:49:28.589701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.520 [2024-12-11 08:49:28.589729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.520 [2024-12-11 08:49:28.589761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.520 [2024-12-11 08:49:28.589775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:27.520 [2024-12-11 08:49:28.589811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbac60 (9): Bad file descriptor 00:15:27.520 [2024-12-11 08:49:28.593756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:27.520 [2024-12-11 08:49:28.621843] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:15:27.520 8861.20 IOPS, 34.61 MiB/s [2024-12-11T08:49:35.294Z] 8891.27 IOPS, 34.73 MiB/s [2024-12-11T08:49:35.294Z] 8925.67 IOPS, 34.87 MiB/s [2024-12-11T08:49:35.294Z] 8948.92 IOPS, 34.96 MiB/s [2024-12-11T08:49:35.294Z] 8977.14 IOPS, 35.07 MiB/s [2024-12-11T08:49:35.294Z] 8992.27 IOPS, 35.13 MiB/s 00:15:27.520 Latency(us) 00:15:27.520 [2024-12-11T08:49:35.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.520 Verification LBA range: start 0x0 length 0x4000 00:15:27.520 NVMe0n1 : 15.01 8993.43 35.13 223.67 0.00 13854.55 662.81 17873.45 00:15:27.520 [2024-12-11T08:49:35.294Z] =================================================================================================================== 00:15:27.520 [2024-12-11T08:49:35.294Z] Total : 8993.43 35.13 223.67 0.00 13854.55 662.81 17873.45 00:15:27.520 Received shutdown signal, test time was about 15.000000 seconds 00:15:27.520 00:15:27.520 Latency(us) 00:15:27.520 [2024-12-11T08:49:35.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.520 [2024-12-11T08:49:35.294Z] =================================================================================================================== 00:15:27.520 [2024-12-11T08:49:35.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76204 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76204 /var/tmp/bdevperf.sock 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76204 ']' 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:27.520 08:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:27.520 [2024-12-11 08:49:35.027090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:27.520 08:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:27.777 [2024-12-11 08:49:35.279320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:27.777 08:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.036 NVMe0n1 00:15:28.036 08:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.295 00:15:28.295 08:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.554 00:15:28.554 08:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:28.554 08:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:28.813 08:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.072 08:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:32.361 08:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:32.361 08:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:32.361 08:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76273 00:15:32.361 08:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.361 08:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76273 00:15:33.758 { 00:15:33.758 "results": [ 00:15:33.758 { 00:15:33.758 "job": "NVMe0n1", 00:15:33.758 "core_mask": "0x1", 00:15:33.758 "workload": "verify", 00:15:33.758 "status": "finished", 00:15:33.758 "verify_range": { 00:15:33.758 "start": 0, 00:15:33.758 "length": 16384 00:15:33.758 }, 00:15:33.758 "queue_depth": 128, 00:15:33.758 "io_size": 4096, 00:15:33.759 "runtime": 1.005271, 00:15:33.759 "iops": 6897.642526244167, 00:15:33.759 "mibps": 26.943916118141278, 00:15:33.759 "io_failed": 0, 00:15:33.759 "io_timeout": 0, 00:15:33.759 "avg_latency_us": 18483.97410179091, 00:15:33.759 "min_latency_us": 2353.338181818182, 00:15:33.759 "max_latency_us": 16205.265454545455 00:15:33.759 } 00:15:33.759 ], 00:15:33.759 "core_count": 1 00:15:33.759 } 00:15:33.759 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:33.759 [2024-12-11 08:49:34.502155] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:15:33.759 [2024-12-11 08:49:34.502258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76204 ] 00:15:33.759 [2024-12-11 08:49:34.650788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.759 [2024-12-11 08:49:34.683448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.759 [2024-12-11 08:49:34.713667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.759 [2024-12-11 08:49:36.736771] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:33.759 [2024-12-11 08:49:36.736902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.759 [2024-12-11 08:49:36.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.759 [2024-12-11 08:49:36.736961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.759 [2024-12-11 08:49:36.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.759 [2024-12-11 08:49:36.736988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.759 [2024-12-11 08:49:36.737001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.759 [2024-12-11 08:49:36.737014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.759 [2024-12-11 08:49:36.737027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.759 [2024-12-11 08:49:36.737040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:33.759 [2024-12-11 08:49:36.737087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:33.759 [2024-12-11 08:49:36.737116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fdc60 (9): Bad file descriptor 00:15:33.759 [2024-12-11 08:49:36.745588] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:33.759 Running I/O for 1 seconds... 00:15:33.759 6806.00 IOPS, 26.59 MiB/s 00:15:33.759 Latency(us) 00:15:33.759 [2024-12-11T08:49:41.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:33.759 Verification LBA range: start 0x0 length 0x4000 00:15:33.759 NVMe0n1 : 1.01 6897.64 26.94 0.00 0.00 18483.97 2353.34 16205.27 00:15:33.759 [2024-12-11T08:49:41.533Z] =================================================================================================================== 00:15:33.759 [2024-12-11T08:49:41.533Z] Total : 6897.64 26.94 0.00 0.00 18483.97 2353.34 16205.27 00:15:33.759 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:33.759 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:33.759 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.029 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.029 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:34.288 08:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.547 08:49:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76204 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76204 ']' 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76204 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76204 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.832 killing process with pid 76204 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76204' 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76204 00:15:37.832 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76204 00:15:38.091 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:38.091 08:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.350 rmmod nvme_tcp 00:15:38.350 rmmod nvme_fabrics 00:15:38.350 rmmod nvme_keyring 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75963 ']' 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75963 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75963 ']' 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75963 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75963 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:38.350 killing process with pid 75963 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75963' 00:15:38.350 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75963 00:15:38.350 08:49:46 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75963 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:38.609 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:38.867 00:15:38.867 real 0m31.195s 00:15:38.867 user 2m0.416s 00:15:38.867 sys 0m5.466s 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.867 08:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 ************************************ 00:15:38.868 END TEST nvmf_failover 00:15:38.868 ************************************ 00:15:38.868 08:49:46 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.868 ************************************ 00:15:38.868 START TEST nvmf_host_discovery 00:15:38.868 ************************************ 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:38.868 * Looking for test storage... 00:15:38.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:38.868 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.128 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.128 --rc genhtml_branch_coverage=1 00:15:39.128 --rc genhtml_function_coverage=1 00:15:39.128 --rc genhtml_legend=1 00:15:39.128 --rc geninfo_all_blocks=1 00:15:39.128 --rc geninfo_unexecuted_blocks=1 00:15:39.129 00:15:39.129 ' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.129 --rc genhtml_branch_coverage=1 00:15:39.129 --rc genhtml_function_coverage=1 00:15:39.129 --rc genhtml_legend=1 00:15:39.129 --rc geninfo_all_blocks=1 00:15:39.129 --rc geninfo_unexecuted_blocks=1 00:15:39.129 00:15:39.129 ' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.129 --rc genhtml_branch_coverage=1 00:15:39.129 --rc genhtml_function_coverage=1 00:15:39.129 --rc genhtml_legend=1 00:15:39.129 --rc geninfo_all_blocks=1 00:15:39.129 --rc geninfo_unexecuted_blocks=1 00:15:39.129 00:15:39.129 ' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.129 --rc genhtml_branch_coverage=1 00:15:39.129 --rc genhtml_function_coverage=1 00:15:39.129 --rc genhtml_legend=1 00:15:39.129 --rc geninfo_all_blocks=1 00:15:39.129 --rc geninfo_unexecuted_blocks=1 00:15:39.129 00:15:39.129 ' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.129 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:39.129 Cannot find device "nvmf_init_br" 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:39.129 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:39.129 Cannot find device "nvmf_init_br2" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:39.130 Cannot find device "nvmf_tgt_br" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.130 Cannot find device "nvmf_tgt_br2" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:39.130 Cannot find device "nvmf_init_br" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:39.130 Cannot find device "nvmf_init_br2" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:39.130 Cannot find device "nvmf_tgt_br" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:39.130 Cannot find device "nvmf_tgt_br2" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:39.130 Cannot find device "nvmf_br" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:39.130 Cannot find device "nvmf_init_if" 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:39.130 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:39.389 Cannot find device "nvmf_init_if2" 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:39.389 08:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.389 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:39.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:39.390 00:15:39.390 --- 10.0.0.3 ping statistics --- 00:15:39.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.390 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:39.390 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:39.390 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:15:39.390 00:15:39.390 --- 10.0.0.4 ping statistics --- 00:15:39.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.390 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:39.390 00:15:39.390 --- 10.0.0.1 ping statistics --- 00:15:39.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.390 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:39.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:39.390 00:15:39.390 --- 10.0.0.2 ping statistics --- 00:15:39.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.390 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76598 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76598 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76598 ']' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.390 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.649 [2024-12-11 08:49:47.203121] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
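For readers following the trace, this is a condensed sketch of the dual-path test network that the nvmf/common.sh helpers above assembled (interface, namespace, and address names are taken verbatim from the trace; the SPDK_NVMF comment tags on the iptables rules are dropped for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator-side path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator-side path 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target-side path 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target-side path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the four peer ends
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3; ping -c 1 10.0.0.4                        # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator reachability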
00:15:39.649 [2024-12-11 08:49:47.203207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.649 [2024-12-11 08:49:47.345565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.649 [2024-12-11 08:49:47.374982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.649 [2024-12-11 08:49:47.375317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.649 [2024-12-11 08:49:47.375472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.649 [2024-12-11 08:49:47.375615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.649 [2024-12-11 08:49:47.375665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.649 [2024-12-11 08:49:47.376074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.649 [2024-12-11 08:49:47.404344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.908 [2024-12-11 08:49:47.500755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.908 [2024-12-11 08:49:47.508868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:39.908 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.909 08:49:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.909 null0 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.909 null1 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.909 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76621 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76621 /tmp/host.sock 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76621 ']' 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.909 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.909 [2024-12-11 08:49:47.593575] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
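The target-side bring-up recorded above reduces to the following sequence (a sketch only: rpc_cmd in the trace is the autotest wrapper, shown here as its rough scripts/rpc.py equivalent against the target's default /var/tmp/spdk.sock socket; values are copied from the trace):

# Target application inside the namespace (pid 76598 above), then its transport/bdev config:
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the test's options
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009   # discovery service on 10.0.0.3:8009
./scripts/rpc.py bdev_null_create null0 1000 512                    # two 1000 MiB null bdevs, 512-byte blocks
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine
# A second nvmf_tgt (pid 76621 above) is started outside the namespace purely to host bdev_nvme,
# with its own RPC socket so the two applications can be driven independently:
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &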
00:15:39.909 [2024-12-11 08:49:47.594022] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76621 ] 00:15:40.167 [2024-12-11 08:49:47.738631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.167 [2024-12-11 08:49:47.769999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.167 [2024-12-11 08:49:47.799855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:40.167 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.168 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.427 08:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.427 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 [2024-12-11 08:49:48.241062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:40.687 08:49:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.687 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.946 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:40.946 08:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:41.205 [2024-12-11 08:49:48.884113] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:41.205 [2024-12-11 08:49:48.884422] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:41.205 [2024-12-11 08:49:48.884466] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:41.205 [2024-12-11 08:49:48.890184] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:41.205 [2024-12-11 08:49:48.944596] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:41.205 [2024-12-11 08:49:48.945478] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cbadc0:1 started. 00:15:41.205 [2024-12-11 08:49:48.947276] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:41.205 [2024-12-11 08:49:48.947481] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:41.205 [2024-12-11 08:49:48.952664] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cbadc0 was disconnected and freed. delete nvme_qpair. 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:41.773 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.032 08:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.032 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:42.032 08:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.033 [2024-12-11 08:49:49.726336] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cc90b0:1 started. 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.033 [2024-12-11 08:49:49.733327] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cc90b0 was disconnected and freed. delete nvme_qpair. 
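From here on the test keeps polling the host-side RPC socket until the discovery service has produced the expected controllers, bdevs, and paths; waitforcondition (autotest_common.sh) retries each check up to 10 times with a one-second sleep, as the max=10 / sleep 1 traces show. A condensed sketch of the discovery start and the checks being looped above; the same checks are repeated later after the 4421 listener is added and the 4420 listener is removed:

./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# Conditions polled by waitforcondition:
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'                            # expect "nvme0" once the subsystem is attached
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                                       # expect "nvme0n1", then "nvme0n1 nvme0n2" after null1 is added
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'  # expect "4420", later "4420 4421", finally "4421"
./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'                         # bdev notifications seen since notify_id 0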
00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:42.033 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.292 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 [2024-12-11 08:49:49.838513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:42.293 [2024-12-11 08:49:49.838843] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:42.293 [2024-12-11 08:49:49.838877] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:42.293 [2024-12-11 08:49:49.844848] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 [2024-12-11 08:49:49.908457] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:42.293 [2024-12-11 08:49:49.908560] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:42.293 [2024-12-11 08:49:49.908572] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:42.293 [2024-12-11 08:49:49.908577] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 08:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.293 [2024-12-11 08:49:50.059695] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:42.293 [2024-12-11 08:49:50.059764] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.293 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.293 [2024-12-11 08:49:50.064748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.553 [2024-12-11 08:49:50.064797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.553 [2024-12-11 08:49:50.064811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.553 [2024-12-11 08:49:50.064820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.553 [2024-12-11 08:49:50.064830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.553 [2024-12-11 08:49:50.064839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.553 [2024-12-11 08:49:50.064849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.553 [2024-12-11 08:49:50.064859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.553 [2024-12-11 08:49:50.064868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c96fb0 is same with the state(6) to be set 00:15:42.553 [2024-12-11 08:49:50.065703] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:42.553 [2024-12-11 08:49:50.065735] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:42.553 [2024-12-11 08:49:50.065794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c96fb0 (9): Bad file descriptor 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.553 
08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:15:42.553 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:42.554 08:49:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.554 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.813 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:42.813 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.813 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:42.813 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.814 
08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.814 08:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.812 [2024-12-11 08:49:51.474848] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:43.812 [2024-12-11 08:49:51.474880] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:43.812 [2024-12-11 08:49:51.474916] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:43.812 [2024-12-11 08:49:51.480879] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:43.812 [2024-12-11 08:49:51.539222] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:43.812 [2024-12-11 08:49:51.539903] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1ca25c0:1 started. 00:15:43.812 [2024-12-11 08:49:51.542065] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:43.812 [2024-12-11 08:49:51.542112] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:43.812 [2024-12-11 08:49:51.544011] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1ca25c0 was disconnected and freed. delete nvme_qpair. 
00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.812 request: 00:15:43.812 { 00:15:43.812 "name": "nvme", 00:15:43.812 "trtype": "tcp", 00:15:43.812 "traddr": "10.0.0.3", 00:15:43.812 "adrfam": "ipv4", 00:15:43.812 "trsvcid": "8009", 00:15:43.812 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:43.812 "wait_for_attach": true, 00:15:43.812 "method": "bdev_nvme_start_discovery", 00:15:43.812 "req_id": 1 00:15:43.812 } 00:15:43.812 Got JSON-RPC error response 00:15:43.812 response: 00:15:43.812 { 00:15:43.812 "code": -17, 00:15:43.812 "message": "File exists" 00:15:43.812 } 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.812 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.072 request: 00:15:44.072 { 00:15:44.072 "name": "nvme_second", 00:15:44.072 "trtype": "tcp", 00:15:44.072 "traddr": "10.0.0.3", 00:15:44.072 "adrfam": "ipv4", 00:15:44.072 "trsvcid": "8009", 00:15:44.072 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:44.072 "wait_for_attach": true, 00:15:44.072 "method": "bdev_nvme_start_discovery", 00:15:44.072 "req_id": 1 00:15:44.072 } 00:15:44.072 Got JSON-RPC error response 00:15:44.072 response: 00:15:44.072 { 00:15:44.072 "code": -17, 00:15:44.072 "message": "File exists" 00:15:44.072 } 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.072 08:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.451 [2024-12-11 08:49:52.795038] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:45.451 [2024-12-11 08:49:52.795127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb53d0 with addr=10.0.0.3, port=8010 00:15:45.451 [2024-12-11 08:49:52.795161] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:45.451 [2024-12-11 
08:49:52.795173] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:45.451 [2024-12-11 08:49:52.795182] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:46.388 [2024-12-11 08:49:53.795009] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:46.388 [2024-12-11 08:49:53.795104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccabc0 with addr=10.0.0.3, port=8010 00:15:46.388 [2024-12-11 08:49:53.795126] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:46.388 [2024-12-11 08:49:53.795148] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:46.388 [2024-12-11 08:49:53.795160] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:47.326 [2024-12-11 08:49:54.794895] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:47.326 request: 00:15:47.326 { 00:15:47.326 "name": "nvme_second", 00:15:47.326 "trtype": "tcp", 00:15:47.326 "traddr": "10.0.0.3", 00:15:47.326 "adrfam": "ipv4", 00:15:47.326 "trsvcid": "8010", 00:15:47.326 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:47.326 "wait_for_attach": false, 00:15:47.326 "attach_timeout_ms": 3000, 00:15:47.326 "method": "bdev_nvme_start_discovery", 00:15:47.326 "req_id": 1 00:15:47.326 } 00:15:47.326 Got JSON-RPC error response 00:15:47.326 response: 00:15:47.326 { 00:15:47.326 "code": -110, 00:15:47.326 "message": "Connection timed out" 00:15:47.326 } 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76621 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:47.326 08:49:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.326 rmmod nvme_tcp 00:15:47.326 rmmod nvme_fabrics 00:15:47.326 rmmod nvme_keyring 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76598 ']' 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76598 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76598 ']' 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76598 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.326 08:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76598 00:15:47.326 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.326 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.326 killing process with pid 76598 00:15:47.326 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76598' 00:15:47.326 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76598 00:15:47.326 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76598 00:15:47.584 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.585 08:49:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.585 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:47.844 00:15:47.844 real 0m8.833s 00:15:47.844 user 0m16.906s 00:15:47.844 sys 0m1.834s 00:15:47.844 ************************************ 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.844 END TEST nvmf_host_discovery 00:15:47.844 ************************************ 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.844 ************************************ 00:15:47.844 START TEST nvmf_host_multipath_status 00:15:47.844 ************************************ 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:47.844 * Looking for test storage... 
00:15:47.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:47.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.844 --rc genhtml_branch_coverage=1 00:15:47.844 --rc genhtml_function_coverage=1 00:15:47.844 --rc genhtml_legend=1 00:15:47.844 --rc geninfo_all_blocks=1 00:15:47.844 --rc geninfo_unexecuted_blocks=1 00:15:47.844 00:15:47.844 ' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:47.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.844 --rc genhtml_branch_coverage=1 00:15:47.844 --rc genhtml_function_coverage=1 00:15:47.844 --rc genhtml_legend=1 00:15:47.844 --rc geninfo_all_blocks=1 00:15:47.844 --rc geninfo_unexecuted_blocks=1 00:15:47.844 00:15:47.844 ' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:47.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.844 --rc genhtml_branch_coverage=1 00:15:47.844 --rc genhtml_function_coverage=1 00:15:47.844 --rc genhtml_legend=1 00:15:47.844 --rc geninfo_all_blocks=1 00:15:47.844 --rc geninfo_unexecuted_blocks=1 00:15:47.844 00:15:47.844 ' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:47.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.844 --rc genhtml_branch_coverage=1 00:15:47.844 --rc genhtml_function_coverage=1 00:15:47.844 --rc genhtml_legend=1 00:15:47.844 --rc geninfo_all_blocks=1 00:15:47.844 --rc geninfo_unexecuted_blocks=1 00:15:47.844 00:15:47.844 ' 00:15:47.844 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.844 08:49:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:48.104 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.105 Cannot find device "nvmf_init_br" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.105 Cannot find device "nvmf_init_br2" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.105 Cannot find device "nvmf_tgt_br" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.105 Cannot find device "nvmf_tgt_br2" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.105 Cannot find device "nvmf_init_br" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.105 Cannot find device "nvmf_init_br2" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.105 Cannot find device "nvmf_tgt_br" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.105 Cannot find device "nvmf_tgt_br2" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.105 Cannot find device "nvmf_br" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:48.105 Cannot find device "nvmf_init_if" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.105 Cannot find device "nvmf_init_if2" 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.105 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:48.365 00:15:48.365 --- 10.0.0.3 ping statistics --- 00:15:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.365 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:48.365 08:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:15:48.365 00:15:48.365 --- 10.0.0.4 ping statistics --- 00:15:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.365 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:48.365 00:15:48.365 --- 10.0.0.1 ping statistics --- 00:15:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.365 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:48.365 00:15:48.365 --- 10.0.0.2 ping statistics --- 00:15:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.365 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=77113 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 77113 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77113 ']' 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
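The block above is the harness building the dual-path test network before the target application starts. A condensed sketch of the topology those commands create (interface names and addresses are taken from the logged commands; the stale-interface cleanup and the "-m comment" tags added by the ipts wrapper are dropped for brevity, so this is not the verbatim nvmf/common.sh source):

  # Namespace for the NVMe-oF target, two veth pairs, everything joined by one bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator side gets 10.0.0.1/10.0.0.2, target side (inside the namespace) 10.0.0.3/10.0.0.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring links up, enslave the bridge-side veth halves to nvmf_br, open TCP/4420, verify reachability.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The pings in the log confirm both target addresses are reachable from the initiator side and vice versa before nvmf_tgt is launched inside the namespace.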
00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.365 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:48.365 [2024-12-11 08:49:56.106972] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:15:48.365 [2024-12-11 08:49:56.107280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.624 [2024-12-11 08:49:56.264020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.624 [2024-12-11 08:49:56.303147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.624 [2024-12-11 08:49:56.303396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.624 [2024-12-11 08:49:56.303510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.624 [2024-12-11 08:49:56.303602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.624 [2024-12-11 08:49:56.303687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.624 [2024-12-11 08:49:56.304675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.624 [2024-12-11 08:49:56.304822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.624 [2024-12-11 08:49:56.339206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:48.624 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.624 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:48.624 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:48.624 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.624 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:48.883 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.883 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77113 00:15:48.883 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.142 [2024-12-11 08:49:56.723297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.142 08:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:49.401 Malloc0 00:15:49.401 08:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:49.660 08:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.919 08:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:50.177 [2024-12-11 08:49:57.807891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.178 08:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:50.436 [2024-12-11 08:49:58.040021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:50.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77161 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77161 /var/tmp/bdevperf.sock 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77161 ']' 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
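At this point the target side is fully configured and both listeners are up. The rpc.py calls logged above reduce to the following sketch (a condensed reconstruction from the logged commands, not the multipath_status.sh source; flag meanings beyond the commented ones are left as logged):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport, options as logged (NVMF_TRANSPORT_OPTS='-t tcp -o', plus -u 8192).
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB RAM-backed bdev with 512-byte blocks, used as the namespace.
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # Subsystem with ANA reporting enabled (-r), one namespace, and two TCP listeners on 10.0.0.3.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The second listener on port 4421 is what gives the host a second path: bdevperf (started below) attaches one controller to each listener with -x multipath, so both attach calls resolve to the same Nvme0n1 bdev.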
00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.436 08:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:51.373 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.373 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:51.373 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:51.632 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:51.891 Nvme0n1 00:15:51.891 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:52.458 Nvme0n1 00:15:52.458 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:52.458 08:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:54.370 08:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:54.370 08:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:54.629 08:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:54.888 08:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:56.266 08:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.266 08:50:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:56.525 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:56.525 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:56.525 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.525 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:56.784 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.784 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:56.784 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.784 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:57.352 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.352 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:57.352 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.352 08:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:57.611 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.611 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:57.611 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.611 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:57.869 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.869 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:57.869 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:58.127 08:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:58.387 08:50:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:59.323 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:59.323 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:59.324 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.324 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:59.891 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.891 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:59.892 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:59.892 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.151 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.151 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:00.151 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.151 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:00.409 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.409 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:00.410 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.410 08:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:00.668 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.668 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:00.668 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:00.669 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.927 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.927 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:00.927 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:00.927 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.186 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.186 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:01.186 08:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:01.444 08:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:01.704 08:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:02.640 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:02.640 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:02.640 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.640 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:02.899 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.899 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:02.899 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.899 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:03.158 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.158 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:03.158 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.158 08:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:03.417 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.417 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:03.417 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.417 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.009 08:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:04.576 08:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.576 08:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:04.576 08:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:04.835 08:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:05.093 08:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:06.029 08:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:06.029 08:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:06.029 08:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.029 08:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:06.288 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.288 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:06.288 08:50:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.288 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:06.856 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:06.856 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:06.856 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.856 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.115 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.115 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.115 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.115 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.374 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.374 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:07.374 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.374 08:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:07.632 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.632 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:07.632 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.632 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:07.895 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.895 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:07.895 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:08.153 08:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:08.411 08:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:09.346 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:09.346 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:09.346 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.346 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:09.604 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:09.604 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:09.604 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.604 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:09.863 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:09.863 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:09.863 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.863 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.122 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.122 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:10.122 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:10.122 08:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.689 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.256 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.256 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:11.256 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:11.256 08:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:11.515 08:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.892 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.151 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.151 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:13.151 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.151 08:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:13.409 08:50:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.409 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:13.409 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.410 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:13.668 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.668 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:13.668 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.668 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:13.962 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:13.962 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:13.962 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:13.962 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.247 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.247 08:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:14.506 08:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:14.506 08:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:14.765 08:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:15.024 08:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:15.960 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:15.960 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:15.960 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.960 08:50:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.219 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.219 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:16.219 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.219 08:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.477 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.477 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:16.477 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.477 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:16.736 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.736 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:16.736 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:16.736 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.994 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.994 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:16.994 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.994 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.253 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.253 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.253 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.253 08:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.819 08:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.819 08:50:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:17.819 08:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:17.819 08:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:18.387 08:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:19.322 08:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:19.322 08:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:19.322 08:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.322 08:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:19.580 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.580 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:19.580 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.580 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.838 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.838 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.838 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.838 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.096 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.096 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.096 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.096 08:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.355 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.355 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.355 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.355 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.613 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.613 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:20.613 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.613 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:20.872 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.872 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:20.872 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:21.130 08:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:21.389 08:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:22.324 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:22.324 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:22.324 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.324 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:22.583 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.583 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:22.583 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.583 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:22.841 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.841 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:22.841 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.841 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.408 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.408 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:23.408 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.408 08:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:23.408 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.408 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:23.408 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.408 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:23.683 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.683 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:23.683 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.683 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:23.951 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.951 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:23.951 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:24.210 08:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:24.780 08:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:25.717 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:25.717 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:25.717 08:50:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.717 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:25.975 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.975 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:25.975 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.975 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.233 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.233 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.233 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.233 08:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:26.491 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.491 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:26.491 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:26.491 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.750 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.750 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:26.750 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.750 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.009 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.009 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:27.009 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.009 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77161 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77161 ']' 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77161 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.268 08:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77161 00:16:27.268 killing process with pid 77161 00:16:27.268 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.268 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.268 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77161' 00:16:27.268 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77161 00:16:27.268 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77161 00:16:27.268 { 00:16:27.268 "results": [ 00:16:27.268 { 00:16:27.268 "job": "Nvme0n1", 00:16:27.268 "core_mask": "0x4", 00:16:27.268 "workload": "verify", 00:16:27.268 "status": "terminated", 00:16:27.268 "verify_range": { 00:16:27.268 "start": 0, 00:16:27.268 "length": 16384 00:16:27.268 }, 00:16:27.268 "queue_depth": 128, 00:16:27.268 "io_size": 4096, 00:16:27.268 "runtime": 34.897562, 00:16:27.268 "iops": 8897.097166845066, 00:16:27.268 "mibps": 34.75428580798854, 00:16:27.268 "io_failed": 0, 00:16:27.268 "io_timeout": 0, 00:16:27.268 "avg_latency_us": 14356.532414924704, 00:16:27.268 "min_latency_us": 1161.7745454545454, 00:16:27.268 "max_latency_us": 4057035.869090909 00:16:27.268 } 00:16:27.268 ], 00:16:27.268 "core_count": 1 00:16:27.268 } 00:16:27.534 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77161 00:16:27.534 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:27.534 [2024-12-11 08:49:58.108038] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:16:27.534 [2024-12-11 08:49:58.108123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77161 ] 00:16:27.534 [2024-12-11 08:49:58.253693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.534 [2024-12-11 08:49:58.285459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.534 [2024-12-11 08:49:58.316245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.534 Running I/O for 90 seconds... 
00:16:27.534 9351.00 IOPS, 36.53 MiB/s [2024-12-11T08:50:35.308Z] 9484.50 IOPS, 37.05 MiB/s [2024-12-11T08:50:35.308Z] 9504.33 IOPS, 37.13 MiB/s [2024-12-11T08:50:35.308Z] 9510.25 IOPS, 37.15 MiB/s [2024-12-11T08:50:35.308Z] 9495.40 IOPS, 37.09 MiB/s [2024-12-11T08:50:35.308Z] 9441.00 IOPS, 36.88 MiB/s [2024-12-11T08:50:35.308Z] 9463.71 IOPS, 36.97 MiB/s [2024-12-11T08:50:35.308Z] 9454.88 IOPS, 36.93 MiB/s [2024-12-11T08:50:35.308Z] 9457.67 IOPS, 36.94 MiB/s [2024-12-11T08:50:35.308Z] 9482.20 IOPS, 37.04 MiB/s [2024-12-11T08:50:35.308Z] 9481.82 IOPS, 37.04 MiB/s [2024-12-11T08:50:35.308Z] 9468.25 IOPS, 36.99 MiB/s [2024-12-11T08:50:35.308Z] 9476.54 IOPS, 37.02 MiB/s [2024-12-11T08:50:35.308Z] 9495.64 IOPS, 37.09 MiB/s [2024-12-11T08:50:35.308Z] 9487.53 IOPS, 37.06 MiB/s [2024-12-11T08:50:35.308Z] [2024-12-11 08:50:15.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.534 [2024-12-11 08:50:15.734850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.734886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 
[2024-12-11 08:50:15.734937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.734972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.734992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.735029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.735108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.735161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:27.534 [2024-12-11 08:50:15.735243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.534 [2024-12-11 08:50:15.735259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5344 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.735538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.735553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.737982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.737997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.738376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 
p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.738953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.738967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.739694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 
[2024-12-11 08:50:15.739963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.739984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.535 [2024-12-11 08:50:15.743900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.743970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.743990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.744005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.744025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.744039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.744059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.535 [2024-12-11 08:50:15.744074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:27.535 [2024-12-11 08:50:15.744094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.744518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.744533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:27.536 
[2024-12-11 08:50:15.745752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.745975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.745990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.746428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.746970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.746992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.747007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.536 [2024-12-11 08:50:15.747071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 
[2024-12-11 08:50:15.747333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.536 [2024-12-11 08:50:15.747522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.536 [2024-12-11 08:50:15.747543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.747740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.747975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.747996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.748652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:16:27.537 [2024-12-11 08:50:15.748860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.748949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.748989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.749580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.749967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.749982] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.750003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.750017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.750038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.750075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.750090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.750111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.750126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.751457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.537 [2024-12-11 08:50:15.751487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.751532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.751569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.751604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.751620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.537 [2024-12-11 08:50:15.751642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.537 [2024-12-11 08:50:15.751657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:27.538 [2024-12-11 08:50:15.751693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.751965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.751987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6264 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.752699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.752714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.753130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:16:27.538 [2024-12-11 08:50:15.753621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.753743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.753782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.753820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.753856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.753885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.753901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.760943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.760980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.538 [2024-12-11 08:50:15.761490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.761978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.762014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.762029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.762051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.538 [2024-12-11 08:50:15.762066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:27.538 [2024-12-11 08:50:15.762087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:27.539 [2024-12-11 08:50:15.762300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5552 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.762770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.762945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.762962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.763424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 
m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.763972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.763987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.764022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.539 [2024-12-11 08:50:15.764058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:27.539 [2024-12-11 08:50:15.764843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.539 [2024-12-11 08:50:15.764857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.764878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.764892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.764913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.764935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.764957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.764972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.764993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 
[2024-12-11 08:50:15.765043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:15.765921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:15.765937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:27.540 9257.06 IOPS, 36.16 MiB/s [2024-12-11T08:50:35.314Z] 8712.53 IOPS, 34.03 MiB/s [2024-12-11T08:50:35.314Z] 8228.50 IOPS, 32.14 MiB/s [2024-12-11T08:50:35.314Z] 7795.42 IOPS, 30.45 MiB/s [2024-12-11T08:50:35.314Z] 7583.00 IOPS, 29.62 MiB/s [2024-12-11T08:50:35.314Z] 7667.62 IOPS, 29.95 MiB/s [2024-12-11T08:50:35.314Z] 7740.36 IOPS, 30.24 MiB/s [2024-12-11T08:50:35.314Z] 7922.26 IOPS, 30.95 MiB/s [2024-12-11T08:50:35.314Z] 8106.79 IOPS, 31.67 MiB/s [2024-12-11T08:50:35.314Z] 8279.64 IOPS, 32.34 MiB/s [2024-12-11T08:50:35.314Z] 8379.50 IOPS, 32.73 MiB/s [2024-12-11T08:50:35.314Z] 8418.48 IOPS, 32.88 MiB/s [2024-12-11T08:50:35.314Z] 8448.39 IOPS, 33.00 MiB/s [2024-12-11T08:50:35.314Z] 8493.90 IOPS, 33.18 MiB/s [2024-12-11T08:50:35.314Z] 8622.80 IOPS, 33.68 MiB/s [2024-12-11T08:50:35.314Z] 8742.13 IOPS, 34.15 MiB/s [2024-12-11T08:50:35.314Z] 8850.31 IOPS, 34.57 MiB/s [2024-12-11T08:50:35.314Z] [2024-12-11 08:50:32.229696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.229761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.231723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.231757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.231964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.231978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.540 [2024-12-11 08:50:32.232773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.540 [2024-12-11 08:50:32.232878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.540 [2024-12-11 08:50:32.232892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.540 8874.18 IOPS, 34.66 MiB/s [2024-12-11T08:50:35.314Z] 8889.18 IOPS, 34.72 MiB/s [2024-12-11T08:50:35.314Z] Received shutdown signal, test time was about 34.898358 seconds 00:16:27.540 00:16:27.540 Latency(us) 00:16:27.540 [2024-12-11T08:50:35.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.540 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:27.540 Verification LBA range: start 0x0 length 0x4000 00:16:27.540 Nvme0n1 : 34.90 8897.10 34.75 0.00 0.00 14356.53 1161.77 4057035.87 00:16:27.540 [2024-12-11T08:50:35.314Z] =================================================================================================================== 00:16:27.540 [2024-12-11T08:50:35.314Z] Total : 8897.10 34.75 0.00 0.00 14356.53 1161.77 4057035.87 00:16:27.540 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:27.799 08:50:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.799 rmmod nvme_tcp 00:16:27.799 rmmod nvme_fabrics 00:16:27.799 rmmod nvme_keyring 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 77113 ']' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 77113 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77113 ']' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77113 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77113 00:16:27.799 killing process with pid 77113 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77113' 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77113 00:16:27.799 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77113 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:28.058 08:50:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:28.058 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:28.317 00:16:28.317 real 0m40.547s 00:16:28.317 user 2m12.334s 00:16:28.317 sys 0m11.259s 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.317 08:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:28.317 ************************************ 00:16:28.317 END TEST nvmf_host_multipath_status 00:16:28.317 ************************************ 00:16:28.317 08:50:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:28.317 08:50:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.317 08:50:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.317 08:50:36 
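nvmftestfini, traced above, unwinds the virtual test network in a fixed order before the END TEST banner: detach the four bridge member interfaces (nomaster), bring them down, delete the nvmf_br bridge, delete the host-side nvmf_init_if/nvmf_init_if2 interfaces, delete the target-side interfaces inside the nvmf_tgt_ns_spdk namespace, then drop the namespace. Below is a condensed, hedged restatement of that sequence; the error suppression and the closing `ip netns delete` are choices of this sketch (the trace calls an internal remove_spdk_ns helper at that point).

    #!/usr/bin/env bash
    # Condensed restatement of the nvmf_veth_fini teardown seen in the trace above.
    ns=nvmf_tgt_ns_spdk
    members="nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2"
    for dev in $members; do ip link set "$dev" nomaster 2>/dev/null || true; done
    for dev in $members; do ip link set "$dev" down     2>/dev/null || true; done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if  2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec "$ns" ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec "$ns" ip link delete nvmf_tgt_if2 2>/dev/null || true
    ip netns delete "$ns" 2>/dev/null || true   # assumption: the effect of remove_spdk_ns
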
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.317 ************************************ 00:16:28.317 START TEST nvmf_discovery_remove_ifc 00:16:28.317 ************************************ 00:16:28.317 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:28.577 * Looking for test storage... 00:16:28.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.577 --rc genhtml_branch_coverage=1 00:16:28.577 --rc genhtml_function_coverage=1 00:16:28.577 --rc genhtml_legend=1 00:16:28.577 --rc geninfo_all_blocks=1 00:16:28.577 --rc geninfo_unexecuted_blocks=1 00:16:28.577 00:16:28.577 ' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.577 --rc genhtml_branch_coverage=1 00:16:28.577 --rc genhtml_function_coverage=1 00:16:28.577 --rc genhtml_legend=1 00:16:28.577 --rc geninfo_all_blocks=1 00:16:28.577 --rc geninfo_unexecuted_blocks=1 00:16:28.577 00:16:28.577 ' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.577 --rc genhtml_branch_coverage=1 00:16:28.577 --rc genhtml_function_coverage=1 00:16:28.577 --rc genhtml_legend=1 00:16:28.577 --rc geninfo_all_blocks=1 00:16:28.577 --rc geninfo_unexecuted_blocks=1 00:16:28.577 00:16:28.577 ' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.577 --rc genhtml_branch_coverage=1 00:16:28.577 --rc genhtml_function_coverage=1 00:16:28.577 --rc genhtml_legend=1 00:16:28.577 --rc geninfo_all_blocks=1 00:16:28.577 --rc geninfo_unexecuted_blocks=1 00:16:28.577 00:16:28.577 ' 00:16:28.577 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.578 08:50:36 
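The scripts/common.sh trace just above shows how the harness decides whether the installed lcov predates version 2: `lt 1.15 2` splits both version strings on dots and dashes (IFS=.-; read -ra), then compares the pieces left to right and returns as soon as one side is larger. A self-contained sketch of that element-wise comparison follows; the function name and the zero-padding of missing fields are choices of this sketch, not the exact cmp_versions implementation.

    #!/usr/bin/env bash
    # Illustrative element-wise version comparison in the spirit of cmp_versions/lt.
    version_lt() {                          # version_lt 1.15 2  -> true, since 1 < 2
        local IFS=.-
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0} # missing fields treated as 0 (sketch choice)
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                            # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"
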
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.578 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.578 08:50:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:28.578 Cannot find device "nvmf_init_br" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:28.578 Cannot find device "nvmf_init_br2" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:28.578 Cannot find device "nvmf_tgt_br" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.578 Cannot find device "nvmf_tgt_br2" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:28.578 Cannot find device "nvmf_init_br" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:28.578 Cannot find device "nvmf_init_br2" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:28.578 Cannot find device "nvmf_tgt_br" 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:28.578 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:28.578 Cannot find device "nvmf_tgt_br2" 00:16:28.579 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:28.579 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:28.579 Cannot find device "nvmf_br" 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:28.838 Cannot find device "nvmf_init_if" 00:16:28.838 08:50:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:28.838 Cannot find device "nvmf_init_if2" 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:28.838 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.838 08:50:36 
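The nvmf_veth_init sequence traced here builds the test topology: a dedicated network namespace for the target, veth pairs whose *_br peer ends are enslaved to an nvmf_br bridge in the commands that follow, and static 10.0.0.x/24 addresses. A minimal standalone sketch of the same pattern, reduced to one initiator/target pair (names and addresses taken from the trace; run as root; this is a simplified reconstruction, not the SPDK helper itself):

    #!/usr/bin/env bash
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side; the *_if end carries the IP, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

    # The target-side interface lives inside the namespace.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator address on the host, target address inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br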
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:28.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:28.839 00:16:28.839 --- 10.0.0.3 ping statistics --- 00:16:28.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.839 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:28.839 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:29.098 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:29.098 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:16:29.098 00:16:29.098 --- 10.0.0.4 ping statistics --- 00:16:29.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.098 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:29.098 00:16:29.098 --- 10.0.0.1 ping statistics --- 00:16:29.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.098 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:29.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:16:29.098 00:16:29.098 --- 10.0.0.2 ping statistics --- 00:16:29.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.098 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=78019 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 78019 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78019 ']' 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
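Two details in the setup traced above are easy to miss. First, ipts is a small wrapper that tags every iptables rule the test inserts with an 'SPDK_NVMF:' comment so teardown can strip exactly those rules later; second, the target application is started inside the namespace and the harness then waits for its RPC socket. A hedged sketch of both (the wrapper body and the wait loop are reconstructions of what the trace shows, not the literal SPDK functions):

    # Tag rules so 'iptables-save | grep -v SPDK_NVMF | iptables-restore' can remove them.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Launch the target in the namespace and wait until its RPC socket answers.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!   # kept for killprocess at teardown
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            > /dev/null 2>&1; do
        sleep 0.5
    done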
00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.098 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.098 [2024-12-11 08:50:36.708559] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:16:29.098 [2024-12-11 08:50:36.709305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.098 [2024-12-11 08:50:36.863436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.358 [2024-12-11 08:50:36.901716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.358 [2024-12-11 08:50:36.901951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.358 [2024-12-11 08:50:36.902057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.358 [2024-12-11 08:50:36.902077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.358 [2024-12-11 08:50:36.902087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.358 [2024-12-11 08:50:36.902482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.358 [2024-12-11 08:50:36.937183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.358 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.358 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:29.358 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.358 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.358 08:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.358 [2024-12-11 08:50:37.041354] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.358 [2024-12-11 08:50:37.049507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:29.358 null0 00:16:29.358 [2024-12-11 08:50:37.081405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=78042 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 78042 /tmp/host.sock 00:16:29.358 
08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 78042 ']' 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.358 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.358 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.617 [2024-12-11 08:50:37.160813] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:16:29.617 [2024-12-11 08:50:37.160903] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78042 ] 00:16:29.617 [2024-12-11 08:50:37.307359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.617 [2024-12-11 08:50:37.347795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.876 [2024-12-11 08:50:37.481580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.876 08:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.813 [2024-12-11 08:50:38.524451] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:30.813 [2024-12-11 08:50:38.524484] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:30.813 [2024-12-11 08:50:38.524524] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:30.813 [2024-12-11 08:50:38.530509] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:30.813 [2024-12-11 08:50:38.584959] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:31.072 [2024-12-11 08:50:38.586011] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ab5fb0:1 started. 00:16:31.072 [2024-12-11 08:50:38.587894] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:31.072 [2024-12-11 08:50:38.587955] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:31.072 [2024-12-11 08:50:38.587985] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:31.072 [2024-12-11 08:50:38.588003] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:31.072 [2024-12-11 08:50:38.588029] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.072 [2024-12-11 08:50:38.593124] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ab5fb0 was disconnected and freed. delete nvme_qpair. 
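In this log, rpc_cmd -s /tmp/host.sock is the harness wrapper around scripts/rpc.py, so the host-side bring-up above reduces to three RPCs against the second nvmf_tgt instance. The arguments are copied from the trace; only the explicit rpc.py invocation is the sketch's assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdev_nvme options exactly as the test issues them, then finish subsystem init.
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
    $rpc -s /tmp/host.sock framework_start_init

    # Attach via the discovery service on 10.0.0.3:8009 and wait for the NVM controller.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach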
00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:31.072 08:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.009 08:50:39 
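This is the heart of the test: with the controller attached, the target-side interface is torn down and the host is expected to drop the nvme0n1 bdev once its ctrlr-loss timeout expires. The two helpers being traced, get_bdev_list and wait_for_bdev, reduce to a jq pipeline and a one-second poll; a simplified reconstruction (the real helper in discovery_remove_ifc.sh may also bound how long it waits):

    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list matches the expected value.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    # Pull the target address out from under the connected controller...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ...and wait for the host to give up on nvme0n1 (empty bdev list).
    wait_for_bdev ''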
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.009 08:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.387 08:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:34.337 08:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.273 08:50:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.273 08:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:36.208 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.467 08:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.467 [2024-12-11 08:50:44.015691] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:36.467 [2024-12-11 08:50:44.015762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.467 [2024-12-11 08:50:44.015777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.467 [2024-12-11 08:50:44.015789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.467 [2024-12-11 08:50:44.015798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.467 [2024-12-11 08:50:44.015812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.467 [2024-12-11 08:50:44.015822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.467 [2024-12-11 08:50:44.015832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.467 [2024-12-11 08:50:44.015840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.467 [2024-12-11 08:50:44.015850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.467 [2024-12-11 08:50:44.015858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.467 [2024-12-11 08:50:44.015867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaee20 is same with the state(6) to be set 00:16:36.467 08:50:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.467 08:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.467 [2024-12-11 08:50:44.025688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaee20 (9): Bad file descriptor 00:16:36.467 [2024-12-11 08:50:44.035704] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:36.467 [2024-12-11 08:50:44.035742] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:36.467 [2024-12-11 08:50:44.035749] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:36.467 [2024-12-11 08:50:44.035755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:36.467 [2024-12-11 08:50:44.035805] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.406 [2024-12-11 08:50:45.067198] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:37.406 [2024-12-11 08:50:45.067276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aaee20 with addr=10.0.0.3, port=4420 00:16:37.406 [2024-12-11 08:50:45.067300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaee20 is same with the state(6) to be set 00:16:37.406 [2024-12-11 08:50:45.067346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaee20 (9): Bad file descriptor 00:16:37.406 [2024-12-11 08:50:45.068002] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:37.406 [2024-12-11 08:50:45.068088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:37.406 [2024-12-11 08:50:45.068111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:37.406 [2024-12-11 08:50:45.068129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:37.406 [2024-12-11 08:50:45.068185] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:16:37.406 [2024-12-11 08:50:45.068199] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:37.406 [2024-12-11 08:50:45.068211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:37.406 [2024-12-11 08:50:45.068230] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:37.406 [2024-12-11 08:50:45.068241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.406 08:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.342 [2024-12-11 08:50:46.068289] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:38.342 [2024-12-11 08:50:46.068338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:38.342 [2024-12-11 08:50:46.068362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:38.342 [2024-12-11 08:50:46.068388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:38.342 [2024-12-11 08:50:46.068398] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:38.342 [2024-12-11 08:50:46.068407] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:38.342 [2024-12-11 08:50:46.068413] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:38.342 [2024-12-11 08:50:46.068418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:38.342 [2024-12-11 08:50:46.068449] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:38.342 [2024-12-11 08:50:46.068489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.342 [2024-12-11 08:50:46.068502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.342 [2024-12-11 08:50:46.068515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.342 [2024-12-11 08:50:46.068523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.342 [2024-12-11 08:50:46.068532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.342 [2024-12-11 08:50:46.068540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.342 [2024-12-11 08:50:46.068549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.342 [2024-12-11 08:50:46.068557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.342 [2024-12-11 08:50:46.068582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.343 [2024-12-11 08:50:46.068606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.343 [2024-12-11 08:50:46.068631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:16:38.343 [2024-12-11 08:50:46.068679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3aa20 (9): Bad file descriptor 00:16:38.343 [2024-12-11 08:50:46.069661] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:38.343 [2024-12-11 08:50:46.069703] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.343 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:38.602 08:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.538 08:50:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:39.538 08:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:40.474 [2024-12-11 08:50:48.078651] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:40.474 [2024-12-11 08:50:48.078681] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:40.474 [2024-12-11 08:50:48.078700] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:40.474 [2024-12-11 08:50:48.084683] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:40.474 [2024-12-11 08:50:48.139021] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:40.474 [2024-12-11 08:50:48.139763] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1ace060:1 started. 00:16:40.474 [2024-12-11 08:50:48.141080] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:40.474 [2024-12-11 08:50:48.141139] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:40.474 [2024-12-11 08:50:48.141173] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:40.474 [2024-12-11 08:50:48.141190] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:40.474 [2024-12-11 08:50:48.141199] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:40.474 [2024-12-11 08:50:48.147483] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1ace060 was disconnected and freed. delete nvme_qpair. 
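The restore half, traced just above, is the mirror image: the address and link come back, the still-running discovery service reconnects on its own, and the re-created controller surfaces as a new bdev (nvme1n1, since nvme0 was deleted). With the helpers sketched earlier, this step amounts to:

    # Bring the target path back and wait for automatic rediscovery/reattach.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1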
00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 78042 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78042 ']' 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78042 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78042 00:16:40.733 killing process with pid 78042 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78042' 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78042 00:16:40.733 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78042 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:40.993 rmmod nvme_tcp 00:16:40.993 rmmod nvme_fabrics 00:16:40.993 rmmod nvme_keyring 00:16:40.993 08:50:48 
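What follows in the trace is the standard teardown: kill the host app, then the target, unload the kernel NVMe-oF modules, strip the SPDK-tagged iptables rules, and delete the links and namespace. Condensed into one hedged sketch (hostpid/nvmfpid are assumed to have been captured with $! when the two nvmf_tgt instances were launched; interface list shortened, error handling omitted):

    kill "$hostpid" && wait "$hostpid"      # /tmp/host.sock instance (pid 78042 in this run)
    kill "$nvmfpid" && wait "$nvmfpid"      # in-namespace nvmf_tgt (pid 78019 in this run)

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Remove only the rules this test tagged earlier.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk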
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 78019 ']' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 78019 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 78019 ']' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 78019 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78019 00:16:40.993 killing process with pid 78019 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78019' 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 78019 00:16:40.993 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 78019 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:41.252 08:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:41.252 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:41.252 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:41.511 00:16:41.511 real 0m13.064s 00:16:41.511 user 0m22.297s 00:16:41.511 sys 0m2.323s 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.511 ************************************ 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:41.511 END TEST nvmf_discovery_remove_ifc 00:16:41.511 ************************************ 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.511 ************************************ 00:16:41.511 START TEST nvmf_identify_kernel_target 00:16:41.511 ************************************ 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:41.511 * Looking for test storage... 
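The nvmftestfini sequence traced above tears the virtual test network back down. In outline, and assuming the interface and namespace names used by this run, it amounts to the following (a sketch of what the iptr, nvmf_veth_fini and remove_spdk_ns helpers do, not the verbatim functions):

    # Drop only the firewall rules this test added; they carry an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge-side veth endpoints from the bridge and bring them down.
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done

    # Remove the bridge, the initiator-side veths, the target-side veths,
    # and finally the target network namespace itself.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk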
00:16:41.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:41.511 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.770 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.771 --rc genhtml_branch_coverage=1 00:16:41.771 --rc genhtml_function_coverage=1 00:16:41.771 --rc genhtml_legend=1 00:16:41.771 --rc geninfo_all_blocks=1 00:16:41.771 --rc geninfo_unexecuted_blocks=1 00:16:41.771 00:16:41.771 ' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.771 --rc genhtml_branch_coverage=1 00:16:41.771 --rc genhtml_function_coverage=1 00:16:41.771 --rc genhtml_legend=1 00:16:41.771 --rc geninfo_all_blocks=1 00:16:41.771 --rc geninfo_unexecuted_blocks=1 00:16:41.771 00:16:41.771 ' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.771 --rc genhtml_branch_coverage=1 00:16:41.771 --rc genhtml_function_coverage=1 00:16:41.771 --rc genhtml_legend=1 00:16:41.771 --rc geninfo_all_blocks=1 00:16:41.771 --rc geninfo_unexecuted_blocks=1 00:16:41.771 00:16:41.771 ' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.771 --rc genhtml_branch_coverage=1 00:16:41.771 --rc genhtml_function_coverage=1 00:16:41.771 --rc genhtml_legend=1 00:16:41.771 --rc geninfo_all_blocks=1 00:16:41.771 --rc geninfo_unexecuted_blocks=1 00:16:41.771 00:16:41.771 ' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
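The "lt 1.15 2" check traced above is the test suite's component-wise version comparison (cmp_versions in scripts/common.sh): both strings are split on ., - and :, missing fields are treated as zero, and the fields are compared numerically left to right. A standalone sketch of the same idea, assuming purely numeric version components (not the verbatim helper):

    # Succeed (return 0) if version $1 is strictly older than version $2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x"

Here 1.15 compares less than 2, which is consistent with the 1.x-style --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options selected immediately afterwards in the trace.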
00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.771 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:41.771 08:50:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.771 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.772 08:50:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:41.772 Cannot find device "nvmf_init_br" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:41.772 Cannot find device "nvmf_init_br2" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:41.772 Cannot find device "nvmf_tgt_br" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.772 Cannot find device "nvmf_tgt_br2" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:41.772 Cannot find device "nvmf_init_br" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:41.772 Cannot find device "nvmf_init_br2" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:41.772 Cannot find device "nvmf_tgt_br" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:41.772 Cannot find device "nvmf_tgt_br2" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:41.772 Cannot find device "nvmf_br" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:41.772 Cannot find device "nvmf_init_if" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:41.772 Cannot find device "nvmf_init_if2" 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.772 08:50:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.772 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.030 08:50:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:42.030 00:16:42.030 --- 10.0.0.3 ping statistics --- 00:16:42.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.030 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:42.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:42.030 00:16:42.030 --- 10.0.0.4 ping statistics --- 00:16:42.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.030 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:42.030 00:16:42.030 --- 10.0.0.1 ping statistics --- 00:16:42.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.030 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:42.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:42.030 00:16:42.030 --- 10.0.0.2 ping statistics --- 00:16:42.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.030 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.030 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:42.289 08:50:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.547 Waiting for block devices as requested 00:16:42.547 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.806 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:42.806 No valid GPT data, bailing 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:42.806 08:50:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:42.806 No valid GPT data, bailing 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:42.806 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:43.065 No valid GPT data, bailing 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:43.065 No valid GPT data, bailing 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -a 10.0.0.1 -t tcp -s 4420 00:16:43.065 00:16:43.065 Discovery Log Number of Records 2, Generation counter 2 00:16:43.065 =====Discovery Log Entry 0====== 00:16:43.065 trtype: tcp 00:16:43.065 adrfam: ipv4 00:16:43.065 subtype: current discovery subsystem 00:16:43.065 treq: not specified, sq flow control disable supported 00:16:43.065 portid: 1 00:16:43.065 trsvcid: 4420 00:16:43.065 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:43.065 traddr: 10.0.0.1 00:16:43.065 eflags: none 00:16:43.065 sectype: none 00:16:43.065 =====Discovery Log Entry 1====== 00:16:43.065 trtype: tcp 00:16:43.065 adrfam: ipv4 00:16:43.065 subtype: nvme subsystem 00:16:43.065 treq: not 
specified, sq flow control disable supported 00:16:43.065 portid: 1 00:16:43.065 trsvcid: 4420 00:16:43.065 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:43.065 traddr: 10.0.0.1 00:16:43.065 eflags: none 00:16:43.065 sectype: none 00:16:43.065 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:43.065 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:43.324 ===================================================== 00:16:43.324 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:43.324 ===================================================== 00:16:43.324 Controller Capabilities/Features 00:16:43.324 ================================ 00:16:43.324 Vendor ID: 0000 00:16:43.324 Subsystem Vendor ID: 0000 00:16:43.324 Serial Number: 9ac7f5b83ad71d5f65e1 00:16:43.324 Model Number: Linux 00:16:43.324 Firmware Version: 6.8.9-20 00:16:43.324 Recommended Arb Burst: 0 00:16:43.324 IEEE OUI Identifier: 00 00 00 00:16:43.324 Multi-path I/O 00:16:43.324 May have multiple subsystem ports: No 00:16:43.324 May have multiple controllers: No 00:16:43.324 Associated with SR-IOV VF: No 00:16:43.324 Max Data Transfer Size: Unlimited 00:16:43.324 Max Number of Namespaces: 0 00:16:43.324 Max Number of I/O Queues: 1024 00:16:43.324 NVMe Specification Version (VS): 1.3 00:16:43.324 NVMe Specification Version (Identify): 1.3 00:16:43.324 Maximum Queue Entries: 1024 00:16:43.324 Contiguous Queues Required: No 00:16:43.324 Arbitration Mechanisms Supported 00:16:43.324 Weighted Round Robin: Not Supported 00:16:43.324 Vendor Specific: Not Supported 00:16:43.324 Reset Timeout: 7500 ms 00:16:43.324 Doorbell Stride: 4 bytes 00:16:43.324 NVM Subsystem Reset: Not Supported 00:16:43.324 Command Sets Supported 00:16:43.324 NVM Command Set: Supported 00:16:43.324 Boot Partition: Not Supported 00:16:43.324 Memory Page Size Minimum: 4096 bytes 00:16:43.324 Memory Page Size Maximum: 4096 bytes 00:16:43.324 Persistent Memory Region: Not Supported 00:16:43.324 Optional Asynchronous Events Supported 00:16:43.324 Namespace Attribute Notices: Not Supported 00:16:43.324 Firmware Activation Notices: Not Supported 00:16:43.324 ANA Change Notices: Not Supported 00:16:43.324 PLE Aggregate Log Change Notices: Not Supported 00:16:43.324 LBA Status Info Alert Notices: Not Supported 00:16:43.324 EGE Aggregate Log Change Notices: Not Supported 00:16:43.324 Normal NVM Subsystem Shutdown event: Not Supported 00:16:43.324 Zone Descriptor Change Notices: Not Supported 00:16:43.324 Discovery Log Change Notices: Supported 00:16:43.324 Controller Attributes 00:16:43.324 128-bit Host Identifier: Not Supported 00:16:43.324 Non-Operational Permissive Mode: Not Supported 00:16:43.324 NVM Sets: Not Supported 00:16:43.324 Read Recovery Levels: Not Supported 00:16:43.324 Endurance Groups: Not Supported 00:16:43.324 Predictable Latency Mode: Not Supported 00:16:43.324 Traffic Based Keep ALive: Not Supported 00:16:43.324 Namespace Granularity: Not Supported 00:16:43.324 SQ Associations: Not Supported 00:16:43.324 UUID List: Not Supported 00:16:43.324 Multi-Domain Subsystem: Not Supported 00:16:43.324 Fixed Capacity Management: Not Supported 00:16:43.324 Variable Capacity Management: Not Supported 00:16:43.324 Delete Endurance Group: Not Supported 00:16:43.324 Delete NVM Set: Not Supported 00:16:43.324 Extended LBA Formats Supported: Not Supported 00:16:43.324 Flexible Data 
Placement Supported: Not Supported 00:16:43.324 00:16:43.324 Controller Memory Buffer Support 00:16:43.324 ================================ 00:16:43.324 Supported: No 00:16:43.324 00:16:43.324 Persistent Memory Region Support 00:16:43.324 ================================ 00:16:43.324 Supported: No 00:16:43.324 00:16:43.325 Admin Command Set Attributes 00:16:43.325 ============================ 00:16:43.325 Security Send/Receive: Not Supported 00:16:43.325 Format NVM: Not Supported 00:16:43.325 Firmware Activate/Download: Not Supported 00:16:43.325 Namespace Management: Not Supported 00:16:43.325 Device Self-Test: Not Supported 00:16:43.325 Directives: Not Supported 00:16:43.325 NVMe-MI: Not Supported 00:16:43.325 Virtualization Management: Not Supported 00:16:43.325 Doorbell Buffer Config: Not Supported 00:16:43.325 Get LBA Status Capability: Not Supported 00:16:43.325 Command & Feature Lockdown Capability: Not Supported 00:16:43.325 Abort Command Limit: 1 00:16:43.325 Async Event Request Limit: 1 00:16:43.325 Number of Firmware Slots: N/A 00:16:43.325 Firmware Slot 1 Read-Only: N/A 00:16:43.325 Firmware Activation Without Reset: N/A 00:16:43.325 Multiple Update Detection Support: N/A 00:16:43.325 Firmware Update Granularity: No Information Provided 00:16:43.325 Per-Namespace SMART Log: No 00:16:43.325 Asymmetric Namespace Access Log Page: Not Supported 00:16:43.325 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:43.325 Command Effects Log Page: Not Supported 00:16:43.325 Get Log Page Extended Data: Supported 00:16:43.325 Telemetry Log Pages: Not Supported 00:16:43.325 Persistent Event Log Pages: Not Supported 00:16:43.325 Supported Log Pages Log Page: May Support 00:16:43.325 Commands Supported & Effects Log Page: Not Supported 00:16:43.325 Feature Identifiers & Effects Log Page:May Support 00:16:43.325 NVMe-MI Commands & Effects Log Page: May Support 00:16:43.325 Data Area 4 for Telemetry Log: Not Supported 00:16:43.325 Error Log Page Entries Supported: 1 00:16:43.325 Keep Alive: Not Supported 00:16:43.325 00:16:43.325 NVM Command Set Attributes 00:16:43.325 ========================== 00:16:43.325 Submission Queue Entry Size 00:16:43.325 Max: 1 00:16:43.325 Min: 1 00:16:43.325 Completion Queue Entry Size 00:16:43.325 Max: 1 00:16:43.325 Min: 1 00:16:43.325 Number of Namespaces: 0 00:16:43.325 Compare Command: Not Supported 00:16:43.325 Write Uncorrectable Command: Not Supported 00:16:43.325 Dataset Management Command: Not Supported 00:16:43.325 Write Zeroes Command: Not Supported 00:16:43.325 Set Features Save Field: Not Supported 00:16:43.325 Reservations: Not Supported 00:16:43.325 Timestamp: Not Supported 00:16:43.325 Copy: Not Supported 00:16:43.325 Volatile Write Cache: Not Present 00:16:43.325 Atomic Write Unit (Normal): 1 00:16:43.325 Atomic Write Unit (PFail): 1 00:16:43.325 Atomic Compare & Write Unit: 1 00:16:43.325 Fused Compare & Write: Not Supported 00:16:43.325 Scatter-Gather List 00:16:43.325 SGL Command Set: Supported 00:16:43.325 SGL Keyed: Not Supported 00:16:43.325 SGL Bit Bucket Descriptor: Not Supported 00:16:43.325 SGL Metadata Pointer: Not Supported 00:16:43.325 Oversized SGL: Not Supported 00:16:43.325 SGL Metadata Address: Not Supported 00:16:43.325 SGL Offset: Supported 00:16:43.325 Transport SGL Data Block: Not Supported 00:16:43.325 Replay Protected Memory Block: Not Supported 00:16:43.325 00:16:43.325 Firmware Slot Information 00:16:43.325 ========================= 00:16:43.325 Active slot: 0 00:16:43.325 00:16:43.325 00:16:43.325 Error Log 
00:16:43.325 ========= 00:16:43.325 00:16:43.325 Active Namespaces 00:16:43.325 ================= 00:16:43.325 Discovery Log Page 00:16:43.325 ================== 00:16:43.325 Generation Counter: 2 00:16:43.325 Number of Records: 2 00:16:43.325 Record Format: 0 00:16:43.325 00:16:43.325 Discovery Log Entry 0 00:16:43.325 ---------------------- 00:16:43.325 Transport Type: 3 (TCP) 00:16:43.325 Address Family: 1 (IPv4) 00:16:43.325 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:43.325 Entry Flags: 00:16:43.325 Duplicate Returned Information: 0 00:16:43.325 Explicit Persistent Connection Support for Discovery: 0 00:16:43.325 Transport Requirements: 00:16:43.325 Secure Channel: Not Specified 00:16:43.325 Port ID: 1 (0x0001) 00:16:43.325 Controller ID: 65535 (0xffff) 00:16:43.325 Admin Max SQ Size: 32 00:16:43.325 Transport Service Identifier: 4420 00:16:43.325 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:43.325 Transport Address: 10.0.0.1 00:16:43.325 Discovery Log Entry 1 00:16:43.325 ---------------------- 00:16:43.325 Transport Type: 3 (TCP) 00:16:43.325 Address Family: 1 (IPv4) 00:16:43.325 Subsystem Type: 2 (NVM Subsystem) 00:16:43.325 Entry Flags: 00:16:43.325 Duplicate Returned Information: 0 00:16:43.325 Explicit Persistent Connection Support for Discovery: 0 00:16:43.325 Transport Requirements: 00:16:43.325 Secure Channel: Not Specified 00:16:43.325 Port ID: 1 (0x0001) 00:16:43.325 Controller ID: 65535 (0xffff) 00:16:43.325 Admin Max SQ Size: 32 00:16:43.325 Transport Service Identifier: 4420 00:16:43.325 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:43.325 Transport Address: 10.0.0.1 00:16:43.325 08:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:43.585 get_feature(0x01) failed 00:16:43.585 get_feature(0x02) failed 00:16:43.585 get_feature(0x04) failed 00:16:43.585 ===================================================== 00:16:43.585 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:43.585 ===================================================== 00:16:43.585 Controller Capabilities/Features 00:16:43.585 ================================ 00:16:43.585 Vendor ID: 0000 00:16:43.585 Subsystem Vendor ID: 0000 00:16:43.585 Serial Number: 466080af2afbba19d1ef 00:16:43.585 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:43.585 Firmware Version: 6.8.9-20 00:16:43.585 Recommended Arb Burst: 6 00:16:43.585 IEEE OUI Identifier: 00 00 00 00:16:43.585 Multi-path I/O 00:16:43.585 May have multiple subsystem ports: Yes 00:16:43.585 May have multiple controllers: Yes 00:16:43.585 Associated with SR-IOV VF: No 00:16:43.585 Max Data Transfer Size: Unlimited 00:16:43.585 Max Number of Namespaces: 1024 00:16:43.585 Max Number of I/O Queues: 128 00:16:43.585 NVMe Specification Version (VS): 1.3 00:16:43.585 NVMe Specification Version (Identify): 1.3 00:16:43.585 Maximum Queue Entries: 1024 00:16:43.585 Contiguous Queues Required: No 00:16:43.585 Arbitration Mechanisms Supported 00:16:43.585 Weighted Round Robin: Not Supported 00:16:43.585 Vendor Specific: Not Supported 00:16:43.585 Reset Timeout: 7500 ms 00:16:43.585 Doorbell Stride: 4 bytes 00:16:43.585 NVM Subsystem Reset: Not Supported 00:16:43.585 Command Sets Supported 00:16:43.585 NVM Command Set: Supported 00:16:43.585 Boot Partition: Not Supported 00:16:43.585 Memory 
Page Size Minimum: 4096 bytes 00:16:43.585 Memory Page Size Maximum: 4096 bytes 00:16:43.585 Persistent Memory Region: Not Supported 00:16:43.585 Optional Asynchronous Events Supported 00:16:43.585 Namespace Attribute Notices: Supported 00:16:43.585 Firmware Activation Notices: Not Supported 00:16:43.585 ANA Change Notices: Supported 00:16:43.585 PLE Aggregate Log Change Notices: Not Supported 00:16:43.585 LBA Status Info Alert Notices: Not Supported 00:16:43.585 EGE Aggregate Log Change Notices: Not Supported 00:16:43.585 Normal NVM Subsystem Shutdown event: Not Supported 00:16:43.585 Zone Descriptor Change Notices: Not Supported 00:16:43.585 Discovery Log Change Notices: Not Supported 00:16:43.585 Controller Attributes 00:16:43.585 128-bit Host Identifier: Supported 00:16:43.585 Non-Operational Permissive Mode: Not Supported 00:16:43.585 NVM Sets: Not Supported 00:16:43.585 Read Recovery Levels: Not Supported 00:16:43.585 Endurance Groups: Not Supported 00:16:43.585 Predictable Latency Mode: Not Supported 00:16:43.585 Traffic Based Keep ALive: Supported 00:16:43.585 Namespace Granularity: Not Supported 00:16:43.585 SQ Associations: Not Supported 00:16:43.585 UUID List: Not Supported 00:16:43.585 Multi-Domain Subsystem: Not Supported 00:16:43.585 Fixed Capacity Management: Not Supported 00:16:43.585 Variable Capacity Management: Not Supported 00:16:43.585 Delete Endurance Group: Not Supported 00:16:43.585 Delete NVM Set: Not Supported 00:16:43.585 Extended LBA Formats Supported: Not Supported 00:16:43.585 Flexible Data Placement Supported: Not Supported 00:16:43.585 00:16:43.585 Controller Memory Buffer Support 00:16:43.585 ================================ 00:16:43.585 Supported: No 00:16:43.585 00:16:43.585 Persistent Memory Region Support 00:16:43.585 ================================ 00:16:43.585 Supported: No 00:16:43.585 00:16:43.585 Admin Command Set Attributes 00:16:43.585 ============================ 00:16:43.585 Security Send/Receive: Not Supported 00:16:43.585 Format NVM: Not Supported 00:16:43.585 Firmware Activate/Download: Not Supported 00:16:43.585 Namespace Management: Not Supported 00:16:43.585 Device Self-Test: Not Supported 00:16:43.585 Directives: Not Supported 00:16:43.585 NVMe-MI: Not Supported 00:16:43.585 Virtualization Management: Not Supported 00:16:43.585 Doorbell Buffer Config: Not Supported 00:16:43.585 Get LBA Status Capability: Not Supported 00:16:43.585 Command & Feature Lockdown Capability: Not Supported 00:16:43.585 Abort Command Limit: 4 00:16:43.585 Async Event Request Limit: 4 00:16:43.585 Number of Firmware Slots: N/A 00:16:43.585 Firmware Slot 1 Read-Only: N/A 00:16:43.585 Firmware Activation Without Reset: N/A 00:16:43.585 Multiple Update Detection Support: N/A 00:16:43.585 Firmware Update Granularity: No Information Provided 00:16:43.585 Per-Namespace SMART Log: Yes 00:16:43.585 Asymmetric Namespace Access Log Page: Supported 00:16:43.585 ANA Transition Time : 10 sec 00:16:43.585 00:16:43.585 Asymmetric Namespace Access Capabilities 00:16:43.585 ANA Optimized State : Supported 00:16:43.585 ANA Non-Optimized State : Supported 00:16:43.585 ANA Inaccessible State : Supported 00:16:43.585 ANA Persistent Loss State : Supported 00:16:43.585 ANA Change State : Supported 00:16:43.585 ANAGRPID is not changed : No 00:16:43.585 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:43.585 00:16:43.585 ANA Group Identifier Maximum : 128 00:16:43.585 Number of ANA Group Identifiers : 128 00:16:43.585 Max Number of Allowed Namespaces : 1024 00:16:43.585 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:43.585 Command Effects Log Page: Supported 00:16:43.585 Get Log Page Extended Data: Supported 00:16:43.585 Telemetry Log Pages: Not Supported 00:16:43.585 Persistent Event Log Pages: Not Supported 00:16:43.585 Supported Log Pages Log Page: May Support 00:16:43.585 Commands Supported & Effects Log Page: Not Supported 00:16:43.585 Feature Identifiers & Effects Log Page:May Support 00:16:43.585 NVMe-MI Commands & Effects Log Page: May Support 00:16:43.585 Data Area 4 for Telemetry Log: Not Supported 00:16:43.585 Error Log Page Entries Supported: 128 00:16:43.585 Keep Alive: Supported 00:16:43.585 Keep Alive Granularity: 1000 ms 00:16:43.585 00:16:43.585 NVM Command Set Attributes 00:16:43.585 ========================== 00:16:43.585 Submission Queue Entry Size 00:16:43.585 Max: 64 00:16:43.585 Min: 64 00:16:43.585 Completion Queue Entry Size 00:16:43.585 Max: 16 00:16:43.585 Min: 16 00:16:43.585 Number of Namespaces: 1024 00:16:43.585 Compare Command: Not Supported 00:16:43.585 Write Uncorrectable Command: Not Supported 00:16:43.585 Dataset Management Command: Supported 00:16:43.585 Write Zeroes Command: Supported 00:16:43.585 Set Features Save Field: Not Supported 00:16:43.585 Reservations: Not Supported 00:16:43.585 Timestamp: Not Supported 00:16:43.585 Copy: Not Supported 00:16:43.585 Volatile Write Cache: Present 00:16:43.585 Atomic Write Unit (Normal): 1 00:16:43.585 Atomic Write Unit (PFail): 1 00:16:43.585 Atomic Compare & Write Unit: 1 00:16:43.585 Fused Compare & Write: Not Supported 00:16:43.585 Scatter-Gather List 00:16:43.585 SGL Command Set: Supported 00:16:43.585 SGL Keyed: Not Supported 00:16:43.585 SGL Bit Bucket Descriptor: Not Supported 00:16:43.585 SGL Metadata Pointer: Not Supported 00:16:43.585 Oversized SGL: Not Supported 00:16:43.585 SGL Metadata Address: Not Supported 00:16:43.585 SGL Offset: Supported 00:16:43.585 Transport SGL Data Block: Not Supported 00:16:43.585 Replay Protected Memory Block: Not Supported 00:16:43.585 00:16:43.585 Firmware Slot Information 00:16:43.585 ========================= 00:16:43.585 Active slot: 0 00:16:43.585 00:16:43.585 Asymmetric Namespace Access 00:16:43.585 =========================== 00:16:43.585 Change Count : 0 00:16:43.585 Number of ANA Group Descriptors : 1 00:16:43.585 ANA Group Descriptor : 0 00:16:43.585 ANA Group ID : 1 00:16:43.585 Number of NSID Values : 1 00:16:43.586 Change Count : 0 00:16:43.586 ANA State : 1 00:16:43.586 Namespace Identifier : 1 00:16:43.586 00:16:43.586 Commands Supported and Effects 00:16:43.586 ============================== 00:16:43.586 Admin Commands 00:16:43.586 -------------- 00:16:43.586 Get Log Page (02h): Supported 00:16:43.586 Identify (06h): Supported 00:16:43.586 Abort (08h): Supported 00:16:43.586 Set Features (09h): Supported 00:16:43.586 Get Features (0Ah): Supported 00:16:43.586 Asynchronous Event Request (0Ch): Supported 00:16:43.586 Keep Alive (18h): Supported 00:16:43.586 I/O Commands 00:16:43.586 ------------ 00:16:43.586 Flush (00h): Supported 00:16:43.586 Write (01h): Supported LBA-Change 00:16:43.586 Read (02h): Supported 00:16:43.586 Write Zeroes (08h): Supported LBA-Change 00:16:43.586 Dataset Management (09h): Supported 00:16:43.586 00:16:43.586 Error Log 00:16:43.586 ========= 00:16:43.586 Entry: 0 00:16:43.586 Error Count: 0x3 00:16:43.586 Submission Queue Id: 0x0 00:16:43.586 Command Id: 0x5 00:16:43.586 Phase Bit: 0 00:16:43.586 Status Code: 0x2 00:16:43.586 Status Code Type: 0x0 00:16:43.586 Do Not Retry: 1 00:16:43.586 Error 
Location: 0x28 00:16:43.586 LBA: 0x0 00:16:43.586 Namespace: 0x0 00:16:43.586 Vendor Log Page: 0x0 00:16:43.586 ----------- 00:16:43.586 Entry: 1 00:16:43.586 Error Count: 0x2 00:16:43.586 Submission Queue Id: 0x0 00:16:43.586 Command Id: 0x5 00:16:43.586 Phase Bit: 0 00:16:43.586 Status Code: 0x2 00:16:43.586 Status Code Type: 0x0 00:16:43.586 Do Not Retry: 1 00:16:43.586 Error Location: 0x28 00:16:43.586 LBA: 0x0 00:16:43.586 Namespace: 0x0 00:16:43.586 Vendor Log Page: 0x0 00:16:43.586 ----------- 00:16:43.586 Entry: 2 00:16:43.586 Error Count: 0x1 00:16:43.586 Submission Queue Id: 0x0 00:16:43.586 Command Id: 0x4 00:16:43.586 Phase Bit: 0 00:16:43.586 Status Code: 0x2 00:16:43.586 Status Code Type: 0x0 00:16:43.586 Do Not Retry: 1 00:16:43.586 Error Location: 0x28 00:16:43.586 LBA: 0x0 00:16:43.586 Namespace: 0x0 00:16:43.586 Vendor Log Page: 0x0 00:16:43.586 00:16:43.586 Number of Queues 00:16:43.586 ================ 00:16:43.586 Number of I/O Submission Queues: 128 00:16:43.586 Number of I/O Completion Queues: 128 00:16:43.586 00:16:43.586 ZNS Specific Controller Data 00:16:43.586 ============================ 00:16:43.586 Zone Append Size Limit: 0 00:16:43.586 00:16:43.586 00:16:43.586 Active Namespaces 00:16:43.586 ================= 00:16:43.586 get_feature(0x05) failed 00:16:43.586 Namespace ID:1 00:16:43.586 Command Set Identifier: NVM (00h) 00:16:43.586 Deallocate: Supported 00:16:43.586 Deallocated/Unwritten Error: Not Supported 00:16:43.586 Deallocated Read Value: Unknown 00:16:43.586 Deallocate in Write Zeroes: Not Supported 00:16:43.586 Deallocated Guard Field: 0xFFFF 00:16:43.586 Flush: Supported 00:16:43.586 Reservation: Not Supported 00:16:43.586 Namespace Sharing Capabilities: Multiple Controllers 00:16:43.586 Size (in LBAs): 1310720 (5GiB) 00:16:43.586 Capacity (in LBAs): 1310720 (5GiB) 00:16:43.586 Utilization (in LBAs): 1310720 (5GiB) 00:16:43.586 UUID: 2473813f-789f-4891-a231-17ab170e439b 00:16:43.586 Thin Provisioning: Not Supported 00:16:43.586 Per-NS Atomic Units: Yes 00:16:43.586 Atomic Boundary Size (Normal): 0 00:16:43.586 Atomic Boundary Size (PFail): 0 00:16:43.586 Atomic Boundary Offset: 0 00:16:43.586 NGUID/EUI64 Never Reused: No 00:16:43.586 ANA group ID: 1 00:16:43.586 Namespace Write Protected: No 00:16:43.586 Number of LBA Formats: 1 00:16:43.586 Current LBA Format: LBA Format #00 00:16:43.586 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:43.586 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.586 rmmod nvme_tcp 00:16:43.586 rmmod nvme_fabrics 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:43.586 08:50:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:43.586 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:43.845 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:43.846 08:50:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:44.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.805 ************************************ 00:16:44.805 END TEST nvmf_identify_kernel_target 00:16:44.805 ************************************ 00:16:44.805 00:16:44.805 real 0m3.306s 00:16:44.805 user 0m1.136s 00:16:44.805 sys 0m1.445s 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.805 ************************************ 00:16:44.805 START TEST nvmf_auth_host 00:16:44.805 ************************************ 00:16:44.805 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:44.805 * Looking for test storage... 
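The clean_kernel_target trace just above tears down the configfs-backed kernel NVMe-oF target in the reverse order it was built: the port-to-subsystem link is removed first, then the namespace, port, and subsystem directories, and finally the nvmet_tcp/nvmet modules are unloaded before setup.sh rebinds the NVMe devices. A minimal sketch of that sequence, assuming the "echo 0" in the trace targets the namespace enable attribute (the destination path is elided in the log):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    if [[ -e $subsys ]]; then
        echo 0 > "$subsys/namespaces/1/enable"              # assumed enable attribute; the trace only shows "echo 0"
        rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn" # unlink port <-> subsystem
        rmdir "$subsys/namespaces/1"
        rmdir "$port"
        rmdir "$subsys"
    fi
    modprobe -r nvmet_tcp nvmet                              # transport module first, then the nvmet core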
00:16:45.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:45.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.065 --rc genhtml_branch_coverage=1 00:16:45.065 --rc genhtml_function_coverage=1 00:16:45.065 --rc genhtml_legend=1 00:16:45.065 --rc geninfo_all_blocks=1 00:16:45.065 --rc geninfo_unexecuted_blocks=1 00:16:45.065 00:16:45.065 ' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:45.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.065 --rc genhtml_branch_coverage=1 00:16:45.065 --rc genhtml_function_coverage=1 00:16:45.065 --rc genhtml_legend=1 00:16:45.065 --rc geninfo_all_blocks=1 00:16:45.065 --rc geninfo_unexecuted_blocks=1 00:16:45.065 00:16:45.065 ' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:45.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.065 --rc genhtml_branch_coverage=1 00:16:45.065 --rc genhtml_function_coverage=1 00:16:45.065 --rc genhtml_legend=1 00:16:45.065 --rc geninfo_all_blocks=1 00:16:45.065 --rc geninfo_unexecuted_blocks=1 00:16:45.065 00:16:45.065 ' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:45.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.065 --rc genhtml_branch_coverage=1 00:16:45.065 --rc genhtml_function_coverage=1 00:16:45.065 --rc genhtml_legend=1 00:16:45.065 --rc geninfo_all_blocks=1 00:16:45.065 --rc geninfo_unexecuted_blocks=1 00:16:45.065 00:16:45.065 ' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.065 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:45.066 Cannot find device "nvmf_init_br" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:45.066 Cannot find device "nvmf_init_br2" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:45.066 Cannot find device "nvmf_tgt_br" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.066 Cannot find device "nvmf_tgt_br2" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:45.066 Cannot find device "nvmf_init_br" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:45.066 Cannot find device "nvmf_init_br2" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:45.066 Cannot find device "nvmf_tgt_br" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:45.066 Cannot find device "nvmf_tgt_br2" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:45.066 Cannot find device "nvmf_br" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:45.066 Cannot find device "nvmf_init_if" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:45.066 Cannot find device "nvmf_init_if2" 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.066 08:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:45.066 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:45.326 08:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:45.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:45.326 00:16:45.326 --- 10.0.0.3 ping statistics --- 00:16:45.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.326 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:45.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:45.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:16:45.326 00:16:45.326 --- 10.0.0.4 ping statistics --- 00:16:45.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.326 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:45.326 00:16:45.326 --- 10.0.0.1 ping statistics --- 00:16:45.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.326 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:45.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:45.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:45.326 00:16:45.326 --- 10.0.0.2 ping statistics --- 00:16:45.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.326 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.326 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.584 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:45.584 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=79030 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 79030 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79030 ']' 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
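The nvmf_veth_init trace above builds the test network entirely from veth pairs: the initiator-side interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, the target-side interfaces (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, and the bridge end of every pair is attached to nvmf_br so the four pings above can cross between the namespaces; nvmf_tgt is then started inside the namespace with -L nvme_auth. A condensed sketch with a single pair per side (the run wires two of each to the same bridge and tags its iptables rules with the SPDK_NVMF comment so nvmftestfini can strip them later):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                  # both bridge ends join nvmf_br,
    ip link set nvmf_tgt_br master nvmf_br                   # so root ns and nvmf_tgt_ns_spdk can reach each other
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                       # host -> namespace, as verified above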
00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.585 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee1f72937c4836d661363a62433818f7 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZY8 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee1f72937c4836d661363a62433818f7 0 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee1f72937c4836d661363a62433818f7 0 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee1f72937c4836d661363a62433818f7 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZY8 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZY8 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZY8 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:45.843 08:50:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=02e6ef1e342148cd0f403ef2a7998c82c88fe5830882bf9684db2a430c30e1cb 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ojM 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 02e6ef1e342148cd0f403ef2a7998c82c88fe5830882bf9684db2a430c30e1cb 3 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 02e6ef1e342148cd0f403ef2a7998c82c88fe5830882bf9684db2a430c30e1cb 3 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=02e6ef1e342148cd0f403ef2a7998c82c88fe5830882bf9684db2a430c30e1cb 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ojM 00:16:45.843 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ojM 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ojM 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a41cb775bac34f625ba7b5f8e07057af47a9d9e3a0da6d3d 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QG2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a41cb775bac34f625ba7b5f8e07057af47a9d9e3a0da6d3d 0 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a41cb775bac34f625ba7b5f8e07057af47a9d9e3a0da6d3d 0 
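Each gen_dhchap_key call in this stretch follows the same pattern: xxd -p -c0 pulls the requested number of random bytes from /dev/urandom as hex, format_dhchap_key wraps that hex into a DHHC-1 secret using the digest code shown in the trace (0/1/2/3 for null/sha256/sha384/sha512), and the result is kept in a mode-0600 temp file whose path lands in keys[] or ckeys[]. A sketch of that call shape (gen_key is a hypothetical name, not SPDK's function; redirecting format_dhchap_key into the file is an assumption, since the trace shows the call but not where its output goes):

    gen_key() {
        # mirrors the gen_dhchap_key calls traced here; not SPDK's own implementation
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=$1 len=$2 key file                      # e.g. "null 32" or "sha512 64"
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # format_dhchap_key (nvmf/common.sh) wraps the hex key into a DHHC-1 secret
        # via an inline "python -" step that the trace does not expand
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                                   # secrets are kept private to the test
        echo "$file"
    }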
00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a41cb775bac34f625ba7b5f8e07057af47a9d9e3a0da6d3d 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QG2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QG2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QG2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3457affdddb0033319658a4856bba7c27659e9069ed66c4 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cIQ 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3457affdddb0033319658a4856bba7c27659e9069ed66c4 2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3457affdddb0033319658a4856bba7c27659e9069ed66c4 2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3457affdddb0033319658a4856bba7c27659e9069ed66c4 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cIQ 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cIQ 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.cIQ 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.102 08:50:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c209dd9886e595668da8fb8a0a9ee598 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hV6 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c209dd9886e595668da8fb8a0a9ee598 1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c209dd9886e595668da8fb8a0a9ee598 1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c209dd9886e595668da8fb8a0a9ee598 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hV6 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hV6 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hV6 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad35a28e04c6803de278a9e98d3578bc 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XyF 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad35a28e04c6803de278a9e98d3578bc 1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad35a28e04c6803de278a9e98d3578bc 1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ad35a28e04c6803de278a9e98d3578bc 00:16:46.102 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:46.103 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.103 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XyF 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XyF 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XyF 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1359ea353db11cf0ab1e63fa93fcf9c8a49da2d53ac08bcb 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2Cm 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1359ea353db11cf0ab1e63fa93fcf9c8a49da2d53ac08bcb 2 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1359ea353db11cf0ab1e63fa93fcf9c8a49da2d53ac08bcb 2 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1359ea353db11cf0ab1e63fa93fcf9c8a49da2d53ac08bcb 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2Cm 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2Cm 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2Cm 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:46.362 08:50:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccd5530cd9e184cb0edc8657d7ca8dc6 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TfV 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccd5530cd9e184cb0edc8657d7ca8dc6 0 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccd5530cd9e184cb0edc8657d7ca8dc6 0 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccd5530cd9e184cb0edc8657d7ca8dc6 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:46.362 08:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TfV 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TfV 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TfV 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ea06ac1d969a33410ec82bed5b23aae6b235ebc1bb23bfdca8c337148cf4e39d 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tCj 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ea06ac1d969a33410ec82bed5b23aae6b235ebc1bb23bfdca8c337148cf4e39d 3 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ea06ac1d969a33410ec82bed5b23aae6b235ebc1bb23bfdca8c337148cf4e39d 3 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ea06ac1d969a33410ec82bed5b23aae6b235ebc1bb23bfdca8c337148cf4e39d 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tCj 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tCj 00:16:46.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tCj 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 79030 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 79030 ']' 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.362 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZY8 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ojM ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ojM 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QG2 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.cIQ ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.cIQ 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hV6 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XyF ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XyF 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2Cm 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TfV ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TfV 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tCj 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.931 08:50:54 
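Each generated file is then registered with the SPDK keyring under a short name: keys[i] becomes keyN (the host's own secret) and ckeys[i] becomes ckeyN (the controller-side secret used for bidirectional authentication). Assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the calls traced above are equivalent to:

    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.QG2
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cIQ
    # ...repeated for key0..key4 / ckey0..ckey3; these names are what
    # --dhchap-key / --dhchap-ctrlr-key refer to when attaching controllers later.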
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:46.931 08:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:47.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:47.190 Waiting for block devices as requested 00:16:47.190 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:47.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:48.017 No valid GPT data, bailing 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:48.017 No valid GPT data, bailing 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:48.017 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:48.018 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.018 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:48.018 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:48.018 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:48.018 No valid GPT data, bailing 00:16:48.018 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:48.277 No valid GPT data, bailing 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:48.277 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -a 10.0.0.1 -t tcp -s 4420 00:16:48.277 00:16:48.277 Discovery Log Number of Records 2, Generation counter 2 00:16:48.277 =====Discovery Log Entry 0====== 00:16:48.277 trtype: tcp 00:16:48.277 adrfam: ipv4 00:16:48.277 subtype: current discovery subsystem 00:16:48.277 treq: not specified, sq flow control disable supported 00:16:48.277 portid: 1 00:16:48.277 trsvcid: 4420 00:16:48.277 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:48.277 traddr: 10.0.0.1 00:16:48.277 eflags: none 00:16:48.277 sectype: none 00:16:48.277 =====Discovery Log Entry 1====== 00:16:48.278 trtype: tcp 00:16:48.278 adrfam: ipv4 00:16:48.278 subtype: nvme subsystem 00:16:48.278 treq: not specified, sq flow control disable supported 00:16:48.278 portid: 1 00:16:48.278 trsvcid: 4420 00:16:48.278 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:48.278 traddr: 10.0.0.1 00:16:48.278 eflags: none 00:16:48.278 sectype: none 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.278 08:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.537 nvme0n1 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.537 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.538 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.797 nvme0n1 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.797 
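The configure_kernel_target and nvmet_auth_set_key steps traced above drive the Linux kernel nvmet target entirely through configfs, but xtrace does not show where each echo is redirected. The sketch below fills in the attribute paths using the conventional nvmet configfs layout; the values come from the trace, the target filenames are an assumption.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    # target side: back namespace 1 with the free local disk, listen on tcp/4420
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # auth side: allow only this host and give it DH-HMAC-CHAP material
    echo 0              > "$subsys/attr_allow_any_host"
    ln -s "$host" "$subsys/allowed_hosts/"
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # nvmet_auth_set_key sha256 ...
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "DHHC-1:..."   > "$host/dhchap_key"         # keys[keyid], abbreviated
    echo "DHHC-1:..."   > "$host/dhchap_ctrl_key"    # ckeys[keyid], when bidirectional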
08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:48.797 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.798 08:50:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.798 nvme0n1 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.798 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:49.057 08:50:56 
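On the initiator side, connect_authenticate pins the allowed digest and DH group, attaches with the named keyring entries, and checks that a controller actually came up before detaching again. Written out as direct RPC calls (again assuming rpc_cmd wraps scripts/rpc.py), one iteration from the trace looks like:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
    scripts/rpc.py bdev_nvme_detach_controller nvme0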
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 nvme0n1 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:49.057 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.058 08:50:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.058 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.317 nvme0n1 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.317 
08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.317 08:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
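The same set_key/connect pair then repeats for every combination; the blocks from here on differ only in digest, DH group and key slot (key4 has no controller key, so that pass exercises one-way authentication only). The loop driving them in host/auth.sh is essentially:

    for digest in "${digests[@]}"; do            # sha256, sha384, sha512
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do       # 0..4
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done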
00:16:49.317 nvme0n1 00:16:49.317 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.317 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.317 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.317 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.317 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.318 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.576 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:49.836 08:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.836 nvme0n1 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.836 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.096 08:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.096 08:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 nvme0n1 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.096 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.097 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.356 08:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.356 nvme0n1 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.356 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.357 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.357 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.616 nvme0n1 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.616 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 nvme0n1 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.875 08:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.443 08:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.443 nvme0n1 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.443 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.444 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.702 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.703 08:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.703 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 nvme0n1 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.962 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.221 nvme0n1 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.221 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.222 08:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 nvme0n1 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.481 08:51:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.481 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 nvme0n1 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.740 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.741 08:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.683 nvme0n1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.683 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.251 nvme0n1 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.251 08:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.251 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.252 08:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.252 08:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.511 nvme0n1 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.511 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.770 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.770 
08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.029 nvme0n1 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.029 08:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.287 nvme0n1 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.287 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.546 08:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.546 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.113 nvme0n1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.114 08:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 nvme0n1 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.681 
08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.248 nvme0n1 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.248 08:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.507 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.508 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 nvme0n1 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 08:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.076 08:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 08:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 nvme0n1 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.643 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.644 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.903 nvme0n1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 nvme0n1 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.903 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:00.162 
08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.162 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.163 nvme0n1 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.163 
08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.163 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.422 nvme0n1 00:17:00.422 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.422 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.422 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.422 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.422 08:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.422 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.423 nvme0n1 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.423 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 nvme0n1 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 
08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.682 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.941 08:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.941 nvme0n1 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.941 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:00.942 08:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.942 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.201 nvme0n1 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.201 08:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.201 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 nvme0n1 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.460 08:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.460 
08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
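A minimal sketch of the per-key cycle this trace keeps repeating (one pass per dhgroup and key index), assuming the test-suite helpers rpc_cmd and nvmet_auth_set_key and the keys/ckeys arrays registered earlier in this run; the RPC names and flags are the ones visible in the trace itself, so treat this as an illustration of the loop rather than the exact script text:

  digest=sha384
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the DH-HMAC-CHAP key for this host (host/auth.sh helper).
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Initiator side: restrict negotiation to a single digest/dhgroup pair.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Connect with the host key; add the controller key only when a ckey exists
      # (bidirectional authentication), exactly as the ckeys[keyid] expansion in the trace does.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
      # Authentication succeeded only if the controller actually appeared; then tear it down.
      [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
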
00:17:01.460 nvme0n1 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.719 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.720 08:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 nvme0n1 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.720 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.978 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.979 08:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.979 08:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.979 nvme0n1 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.979 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.238 nvme0n1 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.238 08:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.238 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.238 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:02.238 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.238 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 nvme0n1 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.756 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.757 nvme0n1 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.757 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:03.015 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.016 08:51:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.016 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.274 nvme0n1 00:17:03.274 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.275 08:51:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.275 08:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.842 nvme0n1 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.842 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.843 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.102 nvme0n1 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.102 08:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 nvme0n1 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:04.669 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.670 08:51:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.670 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.929 nvme0n1 00:17:04.929 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.930 08:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.497 nvme0n1 00:17:05.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.498 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.757 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 nvme0n1 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.326 08:51:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.326 08:51:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.326 08:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 nvme0n1 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.895 08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 
08:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.472 nvme0n1 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.472 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.736 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.737 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.304 nvme0n1 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.304 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:08.305 08:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.305 08:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.305 08:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.305 nvme0n1 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.305 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:08.565 08:51:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 nvme0n1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.824 nvme0n1 00:17:08.824 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.824 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.824 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.824 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.824 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.825 nvme0n1 00:17:08.825 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.084 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 nvme0n1 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.085 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.344 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.344 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.344 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.344 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.344 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.345 nvme0n1 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.345 08:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.345 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.604 nvme0n1 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:09.604 
08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.604 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.605 nvme0n1 00:17:09.605 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.864 
08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 nvme0n1 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.864 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.124 nvme0n1 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.124 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.125 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.125 08:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.384 nvme0n1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.384 
08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.384 08:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.384 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.643 nvme0n1 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:10.643 08:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.643 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.902 nvme0n1 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.903 08:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.903 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.162 nvme0n1 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.162 
08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.162 08:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:11.421 nvme0n1 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.421 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.422 08:51:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.422 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.990 nvme0n1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.990 08:51:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.990 08:51:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.990 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.249 nvme0n1 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.249 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.250 08:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.510 nvme0n1 00:17:12.510 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.769 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.028 nvme0n1 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.028 08:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.596 nvme0n1 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUxZjcyOTM3YzQ4MzZkNjYxMzYzYTYyNDMzODE4ZjckayWz: 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: ]] 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJlNmVmMWUzNDIxNDhjZDBmNDAzZWYyYTc5OThjODJjODhmZTU4MzA4ODJiZjk2ODRkYjJhNDMwYzMwZTFjYgv1PSI=: 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.596 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.597 08:51:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.597 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.164 nvme0n1 00:17:14.164 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.164 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.164 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.164 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.164 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.165 08:51:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.165 08:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.732 nvme0n1 00:17:14.732 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.732 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.733 08:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.301 nvme0n1 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.301 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM1OWVhMzUzZGIxMWNmMGFiMWU2M2ZhOTNmY2Y5YzhhNDlkYTJkNTNhYzA4YmNiZJJOiw==: 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2NkNTUzMGNkOWUxODRjYjBlZGM4NjU3ZDdjYThkYzaL/Iai: 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.560 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.128 nvme0n1 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWEwNmFjMWQ5NjlhMzM0MTBlYzgyYmVkNWIyM2FhZTZiMjM1ZWJjMWJiMjNiZmRjYThjMzM3MTQ4Y2Y0ZTM5ZMSQdQc=: 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.128 08:51:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.128 08:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.698 nvme0n1 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.698 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 request: 00:17:16.699 { 00:17:16.699 "name": "nvme0", 00:17:16.699 "trtype": "tcp", 00:17:16.699 "traddr": "10.0.0.1", 00:17:16.699 "adrfam": "ipv4", 00:17:16.699 "trsvcid": "4420", 00:17:16.699 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.699 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.699 "prchk_reftag": false, 00:17:16.699 "prchk_guard": false, 00:17:16.699 "hdgst": false, 00:17:16.699 "ddgst": false, 00:17:16.699 "allow_unrecognized_csi": false, 00:17:16.699 "method": "bdev_nvme_attach_controller", 00:17:16.699 "req_id": 1 00:17:16.699 } 00:17:16.699 Got JSON-RPC error response 00:17:16.699 response: 00:17:16.699 { 00:17:16.699 "code": -5, 00:17:16.699 "message": "Input/output error" 00:17:16.699 } 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.958 request: 00:17:16.958 { 00:17:16.958 "name": "nvme0", 00:17:16.958 "trtype": "tcp", 00:17:16.958 "traddr": "10.0.0.1", 00:17:16.958 "adrfam": "ipv4", 00:17:16.958 "trsvcid": "4420", 00:17:16.958 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.958 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.958 "prchk_reftag": false, 00:17:16.958 "prchk_guard": false, 00:17:16.958 "hdgst": false, 00:17:16.958 "ddgst": false, 00:17:16.958 "dhchap_key": "key2", 00:17:16.958 "allow_unrecognized_csi": false, 00:17:16.958 "method": "bdev_nvme_attach_controller", 00:17:16.958 "req_id": 1 00:17:16.958 } 00:17:16.958 Got JSON-RPC error response 00:17:16.958 response: 00:17:16.958 { 00:17:16.958 "code": -5, 00:17:16.958 "message": "Input/output error" 00:17:16.958 } 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.958 08:51:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.958 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.959 request: 00:17:16.959 { 00:17:16.959 "name": "nvme0", 00:17:16.959 "trtype": "tcp", 00:17:16.959 "traddr": "10.0.0.1", 00:17:16.959 "adrfam": "ipv4", 00:17:16.959 "trsvcid": "4420", 
00:17:16.959 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.959 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.959 "prchk_reftag": false, 00:17:16.959 "prchk_guard": false, 00:17:16.959 "hdgst": false, 00:17:16.959 "ddgst": false, 00:17:16.959 "dhchap_key": "key1", 00:17:16.959 "dhchap_ctrlr_key": "ckey2", 00:17:16.959 "allow_unrecognized_csi": false, 00:17:16.959 "method": "bdev_nvme_attach_controller", 00:17:16.959 "req_id": 1 00:17:16.959 } 00:17:16.959 Got JSON-RPC error response 00:17:16.959 response: 00:17:16.959 { 00:17:16.959 "code": -5, 00:17:16.959 "message": "Input/output error" 00:17:16.959 } 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.959 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 nvme0n1 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 request: 00:17:17.219 { 00:17:17.219 "name": "nvme0", 00:17:17.219 "dhchap_key": "key1", 00:17:17.219 "dhchap_ctrlr_key": "ckey2", 00:17:17.219 "method": "bdev_nvme_set_keys", 00:17:17.219 "req_id": 1 00:17:17.219 } 00:17:17.219 Got JSON-RPC error response 00:17:17.219 response: 00:17:17.219 
{ 00:17:17.219 "code": -13, 00:17:17.219 "message": "Permission denied" 00:17:17.219 } 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:17.219 08:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:18.173 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.173 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:18.173 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.173 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.173 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.432 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:18.432 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:18.432 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.432 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTQxY2I3NzViYWMzNGY2MjViYTdiNWY4ZTA3MDU3YWY0N2E5ZDllM2EwZGE2ZDNk8JR5GA==: 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: ]] 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM0NTdhZmZkZGRiMDAzMzMxOTY1OGE0ODU2YmJhN2MyNzY1OWU5MDY5ZWQ2NmM0K4V45g==: 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.433 08:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.433 nvme0n1 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzIwOWRkOTg4NmU1OTU2NjhkYThmYjhhMGE5ZWU1OTg/O/Uy: 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: ]] 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQzNWEyOGUwNGM2ODAzZGUyNzhhOWU5OGQzNTc4YmMxHwqa: 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.433 request: 00:17:18.433 { 00:17:18.433 "name": "nvme0", 00:17:18.433 "dhchap_key": "key2", 00:17:18.433 "dhchap_ctrlr_key": "ckey1", 00:17:18.433 "method": "bdev_nvme_set_keys", 00:17:18.433 "req_id": 1 00:17:18.433 } 00:17:18.433 Got JSON-RPC error response 00:17:18.433 response: 00:17:18.433 { 00:17:18.433 "code": -13, 00:17:18.433 "message": "Permission denied" 00:17:18.433 } 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:18.433 08:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.809 rmmod nvme_tcp 00:17:19.809 rmmod nvme_fabrics 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 79030 ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 79030 ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.809 killing process with pid 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79030' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 79030 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:19.809 08:51:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.809 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:20.070 08:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:20.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:20.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
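The cleanup traced above tears down the kernel nvmet target through configfs before unloading the modules: unlink the allowed host, drop the host entry, detach the subsystem from its port, remove namespace, port and subsystem directories, then remove nvmet_tcp/nvmet. A minimal sketch of that same order, using shell variables only for brevity (the trace spells the paths out in full):

    cfg=/sys/kernel/config/nvmet
    subsys=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0

    # Unlink the allowed host from the subsystem, then drop the host entry itself
    rm    "$cfg/subsystems/$subsys/allowed_hosts/$host"
    rmdir "$cfg/hosts/$host"

    # Detach the subsystem from port 1, remove its namespace, then the port and subsystem dirs
    rm -f "$cfg/ports/1/subsystems/$subsys"
    rmdir "$cfg/subsystems/$subsys/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$subsys"

    # Finally unload the kernel target modules
    modprobe -r nvmet_tcp nvmet
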
00:17:20.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:20.893 08:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZY8 /tmp/spdk.key-null.QG2 /tmp/spdk.key-sha256.hV6 /tmp/spdk.key-sha384.2Cm /tmp/spdk.key-sha512.tCj /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:20.893 08:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:21.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:21.411 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.411 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.411 00:17:21.411 real 0m36.461s 00:17:21.411 user 0m33.288s 00:17:21.411 sys 0m3.819s 00:17:21.411 08:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.411 08:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.411 ************************************ 00:17:21.411 END TEST nvmf_auth_host 00:17:21.411 ************************************ 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.411 ************************************ 00:17:21.411 START TEST nvmf_digest 00:17:21.411 ************************************ 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:21.411 * Looking for test storage... 
00:17:21.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.411 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.670 --rc genhtml_branch_coverage=1 00:17:21.670 --rc genhtml_function_coverage=1 00:17:21.670 --rc genhtml_legend=1 00:17:21.670 --rc geninfo_all_blocks=1 00:17:21.670 --rc geninfo_unexecuted_blocks=1 00:17:21.670 00:17:21.670 ' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.670 --rc genhtml_branch_coverage=1 00:17:21.670 --rc genhtml_function_coverage=1 00:17:21.670 --rc genhtml_legend=1 00:17:21.670 --rc geninfo_all_blocks=1 00:17:21.670 --rc geninfo_unexecuted_blocks=1 00:17:21.670 00:17:21.670 ' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.670 --rc genhtml_branch_coverage=1 00:17:21.670 --rc genhtml_function_coverage=1 00:17:21.670 --rc genhtml_legend=1 00:17:21.670 --rc geninfo_all_blocks=1 00:17:21.670 --rc geninfo_unexecuted_blocks=1 00:17:21.670 00:17:21.670 ' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.670 --rc genhtml_branch_coverage=1 00:17:21.670 --rc genhtml_function_coverage=1 00:17:21.670 --rc genhtml_legend=1 00:17:21.670 --rc geninfo_all_blocks=1 00:17:21.670 --rc geninfo_unexecuted_blocks=1 00:17:21.670 00:17:21.670 ' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.670 08:51:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.670 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:21.671 Cannot find device "nvmf_init_br" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:21.671 Cannot find device "nvmf_init_br2" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:21.671 Cannot find device "nvmf_tgt_br" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:21.671 Cannot find device "nvmf_tgt_br2" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:21.671 Cannot find device "nvmf_init_br" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:21.671 Cannot find device "nvmf_init_br2" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:21.671 Cannot find device "nvmf_tgt_br" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:21.671 Cannot find device "nvmf_tgt_br2" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:21.671 Cannot find device "nvmf_br" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:21.671 Cannot find device "nvmf_init_if" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:21.671 Cannot find device "nvmf_init_if2" 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.671 08:51:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:21.671 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:21.930 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:21.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:21.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:17:21.931 00:17:21.931 --- 10.0.0.3 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:21.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:21.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:21.931 00:17:21.931 --- 10.0.0.4 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:21.931 00:17:21.931 --- 10.0.0.1 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:21.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:21.931 00:17:21.931 --- 10.0.0.2 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 ************************************ 00:17:21.931 START TEST nvmf_digest_clean 00:17:21.931 ************************************ 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
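The veth/bridge topology brought up in the trace above (two initiator links, two target links inside the nvmf_tgt_ns_spdk namespace, all bridged and ping-checked) can be reduced to a single initiator/target pair. A condensed sketch with the same interface names and 10.0.0.0/24 addressing; the iptables comment markers used by the test are omitted:

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator, one for the target (the test creates two of each)
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator on 10.0.0.1, target on 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends together and let NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check, as the trace does
    ping -c 1 10.0.0.3
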
00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80669 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80669 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80669 ']' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.931 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 [2024-12-11 08:51:29.690755] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:21.931 [2024-12-11 08:51:29.690850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.190 [2024-12-11 08:51:29.845269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.190 [2024-12-11 08:51:29.882917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.190 [2024-12-11 08:51:29.882988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.190 [2024-12-11 08:51:29.883006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.190 [2024-12-11 08:51:29.883015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.190 [2024-12-11 08:51:29.883024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
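nvmfappstart above launches the SPDK target inside the namespace with --wait-for-rpc and then waits for the default RPC socket to come up. A condensed sketch of that launch; the polling loop is only an illustration of what waitforlisten achieves, not its actual implementation:

    SPDK=/home/vagrant/spdk_repo/spdk

    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait until the target answers on the default RPC socket before configuring it
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
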
00:17:22.190 [2024-12-11 08:51:29.883424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.190 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.190 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:22.190 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.190 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.190 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.449 08:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:22.449 [2024-12-11 08:51:30.022285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.449 null0 00:17:22.449 [2024-12-11 08:51:30.056487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.449 [2024-12-11 08:51:30.080654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80694 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80694 /var/tmp/bperf.sock 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80694 ']' 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.449 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:22.449 [2024-12-11 08:51:30.146363] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:22.449 [2024-12-11 08:51:30.146477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80694 ] 00:17:22.709 [2024-12-11 08:51:30.298108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.709 [2024-12-11 08:51:30.337781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.709 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.709 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:22.709 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:22.709 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:22.709 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:22.968 [2024-12-11 08:51:30.687562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.968 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.968 08:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.536 nvme0n1 00:17:23.536 08:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:23.536 08:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:23.536 Running I/O for 2 seconds... 
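The randread run above drives bdevperf entirely over its own RPC socket: start it with --wait-for-rpc, finish framework init, attach an NVMe-oF controller with data digest enabled, then kick off the workload with bdevperf.py. The same calls from the trace, condensed into one snippet:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF=/var/tmp/bperf.sock

    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Finish framework init (accel modules, sockets), then attach the target with data digest on
    "$SPDK/scripts/rpc.py" -s "$BPERF" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the 2-second workload defined on the bdevperf command line
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests
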
00:17:25.850 15875.00 IOPS, 62.01 MiB/s [2024-12-11T08:51:33.624Z] 15938.50 IOPS, 62.26 MiB/s 00:17:25.850 Latency(us) 00:17:25.850 [2024-12-11T08:51:33.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:25.850 nvme0n1 : 2.01 15947.49 62.29 0.00 0.00 8020.22 7357.91 21686.46 00:17:25.850 [2024-12-11T08:51:33.624Z] =================================================================================================================== 00:17:25.850 [2024-12-11T08:51:33.624Z] Total : 15947.49 62.29 0.00 0.00 8020.22 7357.91 21686.46 00:17:25.850 { 00:17:25.850 "results": [ 00:17:25.850 { 00:17:25.850 "job": "nvme0n1", 00:17:25.850 "core_mask": "0x2", 00:17:25.850 "workload": "randread", 00:17:25.850 "status": "finished", 00:17:25.850 "queue_depth": 128, 00:17:25.850 "io_size": 4096, 00:17:25.850 "runtime": 2.006899, 00:17:25.850 "iops": 15947.489136224593, 00:17:25.850 "mibps": 62.294879438377315, 00:17:25.850 "io_failed": 0, 00:17:25.850 "io_timeout": 0, 00:17:25.850 "avg_latency_us": 8020.223378278962, 00:17:25.850 "min_latency_us": 7357.905454545455, 00:17:25.850 "max_latency_us": 21686.458181818183 00:17:25.850 } 00:17:25.850 ], 00:17:25.850 "core_count": 1 00:17:25.850 } 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:25.850 | select(.opcode=="crc32c") 00:17:25.850 | "\(.module_name) \(.executed)"' 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80694 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80694 ']' 00:17:25.850 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80694 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80694 00:17:25.851 killing process with pid 80694 00:17:25.851 Received shutdown signal, test time was about 2.000000 seconds 00:17:25.851 00:17:25.851 Latency(us) 00:17:25.851 [2024-12-11T08:51:33.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:25.851 [2024-12-11T08:51:33.625Z] =================================================================================================================== 00:17:25.851 [2024-12-11T08:51:33.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80694' 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80694 00:17:25.851 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80694 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80741 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80741 /var/tmp/bperf.sock 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80741 ']' 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:26.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.110 08:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.110 [2024-12-11 08:51:33.759815] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:17:26.110 [2024-12-11 08:51:33.760104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80741 ] 00:17:26.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.110 Zero copy mechanism will not be used. 00:17:26.369 [2024-12-11 08:51:33.901082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.369 [2024-12-11 08:51:33.935325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.369 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.369 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:26.369 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:26.369 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:26.369 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:26.628 [2024-12-11 08:51:34.261218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.628 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.628 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.887 nvme0n1 00:17:26.887 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:26.887 08:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:27.146 Zero copy mechanism will not be used. 00:17:27.146 Running I/O for 2 seconds... 
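Each run ends with the same verification seen in the trace: query bdevperf's accel layer for the module that executed the crc32c operations and check it matches the expected one (software here, since no DSA was requested). That check as a standalone snippet, built from the accel_get_stats call and jq filter shown above:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF=/var/tmp/bperf.sock

    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s "$BPERF" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # Digest work was actually executed, and in the expected module
    (( acc_executed > 0 )) && [[ $acc_module == software ]]
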
00:17:29.039 7872.00 IOPS, 984.00 MiB/s [2024-12-11T08:51:36.813Z] 7944.00 IOPS, 993.00 MiB/s 00:17:29.039 Latency(us) 00:17:29.039 [2024-12-11T08:51:36.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.039 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:29.039 nvme0n1 : 2.00 7941.64 992.70 0.00 0.00 2011.57 1765.00 10962.39 00:17:29.039 [2024-12-11T08:51:36.813Z] =================================================================================================================== 00:17:29.039 [2024-12-11T08:51:36.813Z] Total : 7941.64 992.70 0.00 0.00 2011.57 1765.00 10962.39 00:17:29.039 { 00:17:29.039 "results": [ 00:17:29.039 { 00:17:29.039 "job": "nvme0n1", 00:17:29.039 "core_mask": "0x2", 00:17:29.039 "workload": "randread", 00:17:29.039 "status": "finished", 00:17:29.039 "queue_depth": 16, 00:17:29.039 "io_size": 131072, 00:17:29.039 "runtime": 2.00261, 00:17:29.039 "iops": 7941.63616480493, 00:17:29.039 "mibps": 992.7045206006162, 00:17:29.039 "io_failed": 0, 00:17:29.039 "io_timeout": 0, 00:17:29.039 "avg_latency_us": 2011.5674117431868, 00:17:29.039 "min_latency_us": 1765.0036363636364, 00:17:29.039 "max_latency_us": 10962.385454545454 00:17:29.039 } 00:17:29.039 ], 00:17:29.039 "core_count": 1 00:17:29.039 } 00:17:29.039 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:29.039 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:29.039 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:29.039 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:29.039 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:29.039 | select(.opcode=="crc32c") 00:17:29.039 | "\(.module_name) \(.executed)"' 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80741 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80741 ']' 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80741 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.299 08:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80741 00:17:29.299 killing process with pid 80741 00:17:29.299 Received shutdown signal, test time was about 2.000000 seconds 00:17:29.299 00:17:29.299 Latency(us) 00:17:29.299 [2024-12-11T08:51:37.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:29.299 [2024-12-11T08:51:37.073Z] =================================================================================================================== 00:17:29.299 [2024-12-11T08:51:37.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.299 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.299 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.299 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80741' 00:17:29.299 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80741 00:17:29.299 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80741 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80794 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80794 /var/tmp/bperf.sock 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80794 ']' 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:29.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.559 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 [2024-12-11 08:51:37.196874] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:17:29.559 [2024-12-11 08:51:37.197169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80794 ] 00:17:29.818 [2024-12-11 08:51:37.339001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.818 [2024-12-11 08:51:37.370597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.818 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.818 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:29.819 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:29.819 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:29.819 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:30.078 [2024-12-11 08:51:37.738889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.078 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.078 08:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.337 nvme0n1 00:17:30.337 08:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:30.337 08:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:30.596 Running I/O for 2 seconds... 
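After each run the harness checks that the CRC-32C digests were actually computed by the expected accel module. It queries accel_get_stats over the same socket and filters the JSON with jq; a sketch of that check as it appears in this trace (here the expected module is "software"; other configurations may report a hardware offload module instead):

  read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # the run passes only if at least one crc32c operation was executed by the expected module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]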
00:17:32.470 17273.00 IOPS, 67.47 MiB/s [2024-12-11T08:51:40.244Z] 17272.50 IOPS, 67.47 MiB/s 00:17:32.470 Latency(us) 00:17:32.470 [2024-12-11T08:51:40.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.470 nvme0n1 : 2.00 17301.65 67.58 0.00 0.00 7391.97 5272.67 15966.95 00:17:32.470 [2024-12-11T08:51:40.244Z] =================================================================================================================== 00:17:32.470 [2024-12-11T08:51:40.244Z] Total : 17301.65 67.58 0.00 0.00 7391.97 5272.67 15966.95 00:17:32.470 { 00:17:32.470 "results": [ 00:17:32.470 { 00:17:32.470 "job": "nvme0n1", 00:17:32.470 "core_mask": "0x2", 00:17:32.470 "workload": "randwrite", 00:17:32.470 "status": "finished", 00:17:32.470 "queue_depth": 128, 00:17:32.470 "io_size": 4096, 00:17:32.470 "runtime": 2.004028, 00:17:32.470 "iops": 17301.654467901648, 00:17:32.470 "mibps": 67.58458776524081, 00:17:32.470 "io_failed": 0, 00:17:32.470 "io_timeout": 0, 00:17:32.470 "avg_latency_us": 7391.970503850258, 00:17:32.470 "min_latency_us": 5272.669090909091, 00:17:32.470 "max_latency_us": 15966.952727272726 00:17:32.470 } 00:17:32.470 ], 00:17:32.470 "core_count": 1 00:17:32.470 } 00:17:32.729 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:32.729 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:32.729 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:32.729 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:32.729 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:32.729 | select(.opcode=="crc32c") 00:17:32.729 | "\(.module_name) \(.executed)"' 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80794 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80794 ']' 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80794 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.988 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80794 00:17:32.988 killing process with pid 80794 00:17:32.988 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.988 00:17:32.988 Latency(us) 00:17:32.988 [2024-12-11T08:51:40.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:32.989 [2024-12-11T08:51:40.763Z] =================================================================================================================== 00:17:32.989 [2024-12-11T08:51:40.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80794' 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80794 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80794 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80842 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80842 /var/tmp/bperf.sock 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80842 ']' 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:32.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.989 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:32.989 [2024-12-11 08:51:40.748642] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:17:32.989 [2024-12-11 08:51:40.748931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80842 ] 00:17:32.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:32.989 Zero copy mechanism will not be used. 00:17:33.248 [2024-12-11 08:51:40.890098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.248 [2024-12-11 08:51:40.922027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.248 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.248 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:33.248 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:33.248 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:33.248 08:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:33.507 [2024-12-11 08:51:41.248513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.766 08:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.766 08:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.025 nvme0n1 00:17:34.025 08:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:34.025 08:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:34.025 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:34.025 Zero copy mechanism will not be used. 00:17:34.025 Running I/O for 2 seconds... 
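A quick sanity check on the figures bdevperf prints: MiB/s is simply IOPS times the I/O size. For the 4 KiB randwrite run above, 17301.65 IOPS x 4096 B / 2^20 = 67.58 MiB/s, and for the earlier 128 KiB randread run, 7941.64 IOPS x 131072 B / 2^20 = 992.70 MiB/s, matching the reported columns.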
00:17:36.341 6624.00 IOPS, 828.00 MiB/s [2024-12-11T08:51:44.115Z] 6674.00 IOPS, 834.25 MiB/s 00:17:36.341 Latency(us) 00:17:36.341 [2024-12-11T08:51:44.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.341 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:36.341 nvme0n1 : 2.00 6672.10 834.01 0.00 0.00 2392.39 2040.55 7864.32 00:17:36.341 [2024-12-11T08:51:44.115Z] =================================================================================================================== 00:17:36.341 [2024-12-11T08:51:44.115Z] Total : 6672.10 834.01 0.00 0.00 2392.39 2040.55 7864.32 00:17:36.341 { 00:17:36.341 "results": [ 00:17:36.341 { 00:17:36.341 "job": "nvme0n1", 00:17:36.341 "core_mask": "0x2", 00:17:36.341 "workload": "randwrite", 00:17:36.341 "status": "finished", 00:17:36.341 "queue_depth": 16, 00:17:36.341 "io_size": 131072, 00:17:36.341 "runtime": 2.002819, 00:17:36.341 "iops": 6672.095681137437, 00:17:36.341 "mibps": 834.0119601421796, 00:17:36.341 "io_failed": 0, 00:17:36.341 "io_timeout": 0, 00:17:36.341 "avg_latency_us": 2392.3935904430823, 00:17:36.341 "min_latency_us": 2040.5527272727272, 00:17:36.341 "max_latency_us": 7864.32 00:17:36.341 } 00:17:36.341 ], 00:17:36.341 "core_count": 1 00:17:36.341 } 00:17:36.341 08:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:36.341 08:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:36.341 08:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:36.341 08:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:36.341 | select(.opcode=="crc32c") 00:17:36.341 | "\(.module_name) \(.executed)"' 00:17:36.341 08:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80842 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80842 ']' 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80842 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80842 00:17:36.341 killing process with pid 80842 00:17:36.341 Received shutdown signal, test time was about 2.000000 seconds 00:17:36.341 00:17:36.341 Latency(us) 00:17:36.341 [2024-12-11T08:51:44.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.341 
[2024-12-11T08:51:44.115Z] =================================================================================================================== 00:17:36.341 [2024-12-11T08:51:44.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80842' 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80842 00:17:36.341 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80842 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80669 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80669 ']' 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80669 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80669 00:17:36.600 killing process with pid 80669 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.600 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.601 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80669' 00:17:36.601 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80669 00:17:36.601 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80669 00:17:36.860 00:17:36.860 real 0m14.759s 00:17:36.860 user 0m29.028s 00:17:36.860 sys 0m4.158s 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.860 ************************************ 00:17:36.860 END TEST nvmf_digest_clean 00:17:36.860 ************************************ 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:36.860 ************************************ 00:17:36.860 START TEST nvmf_digest_error 00:17:36.860 ************************************ 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:36.860 08:51:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80922 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80922 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80922 ']' 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.860 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.860 [2024-12-11 08:51:44.494581] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:36.860 [2024-12-11 08:51:44.494669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.119 [2024-12-11 08:51:44.637907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.119 [2024-12-11 08:51:44.667835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.119 [2024-12-11 08:51:44.668118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.119 [2024-12-11 08:51:44.668275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.119 [2024-12-11 08:51:44.668288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.119 [2024-12-11 08:51:44.668295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
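The nvmf_digest_error suite starting here differs from the clean pass in one respect: before the target accepts connections, the crc32c opcode is routed through the error accel module so digest failures can be injected on demand, which is why the host side later logs "data digest error" completions. The relevant calls, copied from further down in this trace (rpc_cmd is the harness wrapper around scripts/rpc.py aimed at the target's /var/tmp/spdk.sock; the -i 256 argument is reproduced verbatim from this run):

  # on the target: route crc32c through the error-injection accel module
  rpc_cmd accel_assign_opc -o crc32c -m error
  # leave injection disabled while the host attaches the controller...
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # ...then switch to corrupting digests so host-side reads fail with transient transport errors
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256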
00:17:37.119 [2024-12-11 08:51:44.668641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.119 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.120 [2024-12-11 08:51:44.797071] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.120 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.120 [2024-12-11 08:51:44.837884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.120 null0 00:17:37.120 [2024-12-11 08:51:44.872258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.379 [2024-12-11 08:51:44.896385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:37.379 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80942 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80942 /var/tmp/bperf.sock 00:17:37.380 08:51:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80942 ']' 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:37.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.380 08:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.380 [2024-12-11 08:51:44.949479] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:37.380 [2024-12-11 08:51:44.949820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80942 ] 00:17:37.380 [2024-12-11 08:51:45.090340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.380 [2024-12-11 08:51:45.121905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.638 [2024-12-11 08:51:45.155099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.638 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.638 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:37.638 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.638 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.958 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.232 nvme0n1 00:17:38.232 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:38.232 08:51:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.232 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.232 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.232 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:38.232 08:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.492 Running I/O for 2 seconds... 00:17:38.492 [2024-12-11 08:51:46.056464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.056516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.056532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.072062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.072329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.072349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.087696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.087898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.087931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.104819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.104874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.104889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.122869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.122908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.122938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.139295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.139513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5136 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.139550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.155838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.155875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.155904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.172553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.172590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.172619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.190089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.190130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.190191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.208059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.208096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.208125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.225770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.225810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.225841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.242031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.242068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.242097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.492 [2024-12-11 08:51:46.257653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.492 [2024-12-11 08:51:46.257703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.492 [2024-12-11 08:51:46.257734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.274887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.274924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.274952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.290691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.290728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.290757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.305981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.306017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.306045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.321712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.321748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.321776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.337135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.337198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.337212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.352432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.352468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.352481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.367775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.367978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.368013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.384242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.384295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.384308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.399762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.399798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.399827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.415033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.415275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.415294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.430908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.431125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.431162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.446571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.446638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.463197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.463236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.463250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.478714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.478908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.478942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.496324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.496363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.752 [2024-12-11 08:51:46.496393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.752 [2024-12-11 08:51:46.514888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:38.752 [2024-12-11 08:51:46.514939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.753 [2024-12-11 08:51:46.514968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.011 [2024-12-11 08:51:46.532524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.532719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.532753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.548274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.548311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.548340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.563717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.563929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.563964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.579330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.579369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.579382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.594600] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.594794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.594827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.610103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.610488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.626215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.626450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.626583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.642338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.642560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.642761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.658298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.658516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.658666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.674062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.674291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.689792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.689860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.689889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:39.012 [2024-12-11 08:51:46.706878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.706916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.706945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.724032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.724246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.724281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.742104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.742168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.742200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.759787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.759960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.759997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-12-11 08:51:46.776396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.012 [2024-12-11 08:51:46.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-12-11 08:51:46.776447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.271 [2024-12-11 08:51:46.793313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.271 [2024-12-11 08:51:46.793526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.271 [2024-12-11 08:51:46.793546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.271 [2024-12-11 08:51:46.809706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.271 [2024-12-11 08:51:46.809744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.271 [2024-12-11 08:51:46.809774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.271 [2024-12-11 08:51:46.826731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.271 [2024-12-11 08:51:46.826768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.271 [2024-12-11 08:51:46.826799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.271 [2024-12-11 08:51:46.842947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.271 [2024-12-11 08:51:46.842998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.271 [2024-12-11 08:51:46.843028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.271 [2024-12-11 08:51:46.859598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.859792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.859827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.875433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.875484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.875513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.890801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.890991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.891025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.906670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.906880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.907037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.922733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.922945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.923186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.938698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.938911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.939112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.955418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.955643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.955787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.971231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.971496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.971703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:46.988873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:46.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:46.989260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:47.006354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:47.006572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:47.006800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 [2024-12-11 08:51:47.023635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:47.023850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:47.023902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.272 15434.00 IOPS, 60.29 MiB/s [2024-12-11T08:51:47.046Z] [2024-12-11 08:51:47.040317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.272 [2024-12-11 08:51:47.040379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:23713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.272 [2024-12-11 08:51:47.040409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.056502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.056537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.056567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.072298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.072334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.072362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.094215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.094250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.094279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.109410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.109445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.109474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.124810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.124861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.124890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.140355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.140390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.140419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.156189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.156224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.156254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.172346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.172409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.172439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.188052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.188279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.188316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.204978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.205034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.205065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.223917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.224205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.224226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.242803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.242900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.242915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.261180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.261229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.261259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.278122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.278185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.278199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.532 [2024-12-11 08:51:47.294350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.532 [2024-12-11 08:51:47.294387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.532 [2024-12-11 08:51:47.294415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.791 [2024-12-11 08:51:47.311892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.791 [2024-12-11 08:51:47.312093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.791 [2024-12-11 08:51:47.312128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.328470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.328721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.328923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.345206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.345420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.345575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.362372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.362589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.362741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.378812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.379024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.379307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.394904] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.395179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.395400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.411125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.411369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.411539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.427244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.427479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.427701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.443409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.443475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.443514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.458783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.458819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.458849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.474249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.474285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.474297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.489546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.489582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.489612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:39.792 [2024-12-11 08:51:47.505081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.505117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.505146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.521845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.521930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.521961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.539963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.540186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.540205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.792 [2024-12-11 08:51:47.557097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:39.792 [2024-12-11 08:51:47.557162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.792 [2024-12-11 08:51:47.557178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.573707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.573743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.573773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.589079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.589116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.589146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.604395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.604430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.604460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.619693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.619884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.619918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.635340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.635596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.635752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.651272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.651501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.651645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.667112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.667353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.667579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.683308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.683538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.683755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.699217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.699399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.699592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.715834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.716080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.716234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.733166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.733381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.733540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.749839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.750073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.750214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.765937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.765975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.766005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.781664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.781701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.781731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.797544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.797581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.797611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.052 [2024-12-11 08:51:47.813331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.052 [2024-12-11 08:51:47.813367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.052 [2024-12-11 08:51:47.813396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.830282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.830320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:40.311 [2024-12-11 08:51:47.830349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.846145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.846179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.846208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.862889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.862941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.862970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.880742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.880779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.880809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.897995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.898061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.914564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.914601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.914631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.931192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.931233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.311 [2024-12-11 08:51:47.931247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.311 [2024-12-11 08:51:47.947468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.311 [2024-12-11 08:51:47.947504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:47.947533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 [2024-12-11 08:51:47.964468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.312 [2024-12-11 08:51:47.964521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:47.964550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 [2024-12-11 08:51:47.981240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.312 [2024-12-11 08:51:47.981277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:47.981306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 [2024-12-11 08:51:47.997961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.312 [2024-12-11 08:51:47.997998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:47.998011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 [2024-12-11 08:51:48.014733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.312 [2024-12-11 08:51:48.014770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:48.014800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 [2024-12-11 08:51:48.032020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2471950) 00:17:40.312 [2024-12-11 08:51:48.032225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.312 [2024-12-11 08:51:48.032244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.312 15433.50 IOPS, 60.29 MiB/s 00:17:40.312 Latency(us) 00:17:40.312 [2024-12-11T08:51:48.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.312 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:40.312 nvme0n1 : 2.01 15442.09 60.32 0.00 0.00 8282.93 7357.91 29431.62 00:17:40.312 [2024-12-11T08:51:48.086Z] =================================================================================================================== 00:17:40.312 [2024-12-11T08:51:48.086Z] Total : 15442.09 60.32 0.00 0.00 8282.93 7357.91 29431.62 00:17:40.312 { 00:17:40.312 "results": [ 00:17:40.312 { 00:17:40.312 "job": "nvme0n1", 00:17:40.312 "core_mask": "0x2", 00:17:40.312 "workload": 
"randread", 00:17:40.312 "status": "finished", 00:17:40.312 "queue_depth": 128, 00:17:40.312 "io_size": 4096, 00:17:40.312 "runtime": 2.007177, 00:17:40.312 "iops": 15442.08607412301, 00:17:40.312 "mibps": 60.32064872704301, 00:17:40.312 "io_failed": 0, 00:17:40.312 "io_timeout": 0, 00:17:40.312 "avg_latency_us": 8282.933855020603, 00:17:40.312 "min_latency_us": 7357.905454545455, 00:17:40.312 "max_latency_us": 29431.62181818182 00:17:40.312 } 00:17:40.312 ], 00:17:40.312 "core_count": 1 00:17:40.312 } 00:17:40.312 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:40.312 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:40.312 | .driver_specific 00:17:40.312 | .nvme_error 00:17:40.312 | .status_code 00:17:40.312 | .command_transient_transport_error' 00:17:40.312 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:40.312 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80942 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80942 ']' 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80942 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:40.571 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80942 00:17:40.830 killing process with pid 80942 00:17:40.830 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.830 00:17:40.830 Latency(us) 00:17:40.830 [2024-12-11T08:51:48.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.830 [2024-12-11T08:51:48.604Z] =================================================================================================================== 00:17:40.830 [2024-12-11T08:51:48.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80942' 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80942 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80942 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randread 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80995 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80995 /var/tmp/bperf.sock 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80995 ']' 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.830 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.830 [2024-12-11 08:51:48.550374] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:40.830 [2024-12-11 08:51:48.550636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80995 ] 00:17:40.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:40.830 Zero copy mechanism will not be used. 
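At this point in the trace the first bdevperf pass (4096-byte random reads at queue depth 128) has finished, and the harness has confirmed that the injected CRC32C corruption actually surfaced as NVMe transient transport errors: it reads the per-bdev NVMe error statistics over the bperf RPC socket, extracts the command_transient_transport_error counter with jq, and requires it to be non-zero (121 in this run) before killing bdevperf pid 80942 and launching the next pass with 131072-byte reads at queue depth 16. A minimal sketch of that check, reconstructed only from the RPC call and jq filter visible in the trace above (not part of the captured output):

  # Fetch iostat for the attached bdev through the bperf RPC socket
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only passes when at least one transient transport error was counted
  (( errcount > 0 ))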
00:17:41.090 [2024-12-11 08:51:48.691599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.090 [2024-12-11 08:51:48.723689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.090 [2024-12-11 08:51:48.753272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.090 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.090 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:41.090 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:41.090 08:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:41.348 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:41.348 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.348 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.349 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.349 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.916 nvme0n1 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:41.917 08:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:41.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:41.917 Zero copy mechanism will not be used. 00:17:41.917 Running I/O for 2 seconds... 
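Before the 2-second run above starts issuing I/O, the trace shows the second pass being configured through a short RPC sequence: NVMe error-status counting and unlimited bdev retries are switched on, CRC32C error injection is cleared while the controller is attached with data digest enabled (--ddgst) over TCP, and only then is the accel crc32c operation set to inject corruption with an interval of 32 (-i 32), so that completed reads fail their data-digest check and show up as the transient transport errors logged below. A condensed sketch of that sequence, assembled from the commands captured in the trace (rpc_cmd is the harness's own RPC wrapper and is left unexpanded here, exactly as it appears in the log):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Track NVMe error statuses per bdev and retry failed I/O indefinitely
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c injection off while the controller attaches
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach the TCP controller with data digest enabled
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c operations at the configured interval so reads report data digest errors
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued randread workload on the bperf side
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests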
00:17:41.917 [2024-12-11 08:51:49.560654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.560725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.560742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.565559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.565599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.565630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.570417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.570456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.570471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.575342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.575411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.575452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.580230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.580327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.580344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.585198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.585267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.585283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.590029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.590067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.590098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.594746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.595008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.595026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.599741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.599781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.599823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.604325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.604361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.604390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.608735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.608773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.608804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.613075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.613142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.617596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.617635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.617665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.917 [2024-12-11 08:51:49.621924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:41.917 [2024-12-11 08:51:49.621962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.917 [2024-12-11 08:51:49.621991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:17:41.917 [2024-12-11 08:51:49.626356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800)
00:17:41.917 [2024-12-11 08:51:49.626393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:41.917 [2024-12-11 08:51:49.626437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-message pattern (nvme_tcp.c:1365 data digest error on tqpair=(0xf21800), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 completion with TRANSIENT TRANSPORT ERROR (00/22)) repeats continuously on qid:1 with cid cycling 0-15, len:32 READs at varying LBAs, from 08:51:49.626 through 08:51:50.231 (elapsed 00:17:41.917-00:17:42.706) ...]
00:17:42.706 [2024-12-11 08:51:50.230982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800)
00:17:42.706 [2024-12-11 08:51:50.231029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:42.706 [2024-12-11 08:51:50.231042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.706 [2024-12-11 08:51:50.235387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.706 [2024-12-11 08:51:50.235424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.235438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.239861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.239910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.239923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.244434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.244483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.244510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.248869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.248917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.248929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.253223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.253270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.253282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.257412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.257461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.257473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.261679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.261726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.261738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.266121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.266197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.266210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.270345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.270393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.270406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.274634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.274683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.274695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.278942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.278989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.279001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.283655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.283704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.283716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.288319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.288354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.288368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.292950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.292997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.293009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.297480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.297557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.297583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.301830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.301877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.301889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.306304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.306352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.306364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.310545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.310591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.310603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.314887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.314933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.314944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.318939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.318986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.318998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.323409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.323472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 
[2024-12-11 08:51:50.323509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.328056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.328103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.707 [2024-12-11 08:51:50.328116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.707 [2024-12-11 08:51:50.332735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.707 [2024-12-11 08:51:50.332784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.332796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.337248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.337295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.337307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.341574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.341622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.341635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.345926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.345973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.345986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.350260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.350308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.354584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.354633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.354645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.358738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.358788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.358800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.363147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.363193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.363207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.367269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.367305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.367318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.371425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.371473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.371512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.375931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.375980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.375992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.380299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.380348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.380360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.384493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.384541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.384553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.388893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.388940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.388952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.393111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.393171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.393185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.397249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.397296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.397308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.401762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.401812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.401824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.406018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.406066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.406078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.410267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.410315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.410327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.414357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.414404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.414416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.418446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.418496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.418522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.422675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.422722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.422733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.426812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.426859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.426871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.430854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.430902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.430915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.434982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.435029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.435041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.708 [2024-12-11 08:51:50.439094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.708 [2024-12-11 08:51:50.439128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.708 [2024-12-11 08:51:50.439156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.443191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 
[2024-12-11 08:51:50.443225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.443237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.447188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.447237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.447249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.451324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.451358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.451396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.455413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.455460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.455486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.459540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.459587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.459599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.463573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.463620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.463632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.467669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.467716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.467728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.471828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.471875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.471902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.709 [2024-12-11 08:51:50.476936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.709 [2024-12-11 08:51:50.476970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.709 [2024-12-11 08:51:50.476982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.481595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.481654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.486108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.486182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.486196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.490260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.490308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.490319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.494443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.494490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.494502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.498572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.498619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.498631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.502563] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.502610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.502622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.506799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.506846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.506858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.511210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.511245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.511258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.515227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.515276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.515288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.519362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.519442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.519454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.523566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.523613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.523625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.527762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.527809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.527820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
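The repeated *ERROR* lines in this stretch of the console output all come from nvme_tcp.c reporting CRC32C data digest failures on the same TCP qpair (tqpair=0xf21800), each followed by the offending READ command print and its COMMAND TRANSIENT TRANSPORT ERROR completion. As an illustrative aside that is not part of the captured log: a minimal Python sketch, assuming the console output has been saved to a plain-text file (the file name, regexes, and helper names below are assumptions for illustration, not taken from this run), showing how these events could be tallied per tqpair and per LBA.

    # Illustrative only -- not part of the Jenkins output above.
    # Tally "data digest error" events per tqpair and count affected READ LBAs.
    import re
    import sys
    from collections import Counter

    DIGEST_RE = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
    READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

    def summarize(path: str) -> None:
        per_qpair = Counter()   # digest errors keyed by tqpair pointer
        lbas = Counter()        # READ LBAs seen in the surrounding command prints
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                for qpair in DIGEST_RE.findall(line):
                    per_qpair[qpair] += 1
                for _sqid, _cid, _nsid, lba, _length in READ_RE.findall(line):
                    lbas[int(lba)] += 1
        for qpair, count in per_qpair.most_common():
            print(f"tqpair {qpair}: {count} data digest errors")
        print(f"distinct READ LBAs involved: {len(lbas)}")

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run against a saved copy of this console log, the per-qpair counter would confirm that every digest error here was raised on the single connection 0xf21800, which is consistent with a deliberate digest-error injection test rather than sporadic corruption.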
00:17:42.970 [2024-12-11 08:51:50.531998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.532045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.532057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.536189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.536247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.536260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.540274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.540321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.540332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.544471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.544517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.544529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.548646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.548693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.548705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.552847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.552895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.552907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.970 7254.00 IOPS, 906.75 MiB/s [2024-12-11T08:51:50.744Z] [2024-12-11 08:51:50.558531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.558593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.970 [2024-12-11 08:51:50.558604] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.970 [2024-12-11 08:51:50.562705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.970 [2024-12-11 08:51:50.562738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.562750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.566799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.566832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.566843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.570919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.570967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.570980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.574935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.574994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.579129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.579190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.579204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.583600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.583652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.583665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.588013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.588063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.588075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.592705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.592755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.592768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.597392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.597474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.597487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.602239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.602301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.602316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.606810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.606904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.611609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.611660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.616339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.616385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.616414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.621305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.621355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:42.971 [2024-12-11 08:51:50.621369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.626156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.626215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.626228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.631187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.631232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.631245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.636053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.636103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.636116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.640893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.640956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.640968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.645571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.645608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.645621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.650224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.650285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.650297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.654909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.654973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.654986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.659568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.659605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.659618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.664147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.664206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.664235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.668761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.668813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.668858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.673401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.673449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.673461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.677987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.678036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.678048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.682586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.682637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.971 [2024-12-11 08:51:50.687033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.971 [2024-12-11 08:51:50.687092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.971 [2024-12-11 08:51:50.687105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.691569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.691637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.691650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.696059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.696108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.696120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.700380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.700429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.704853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.704902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.704914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.709307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.709355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.713773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.713823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.713836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.718158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.718218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.718231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.722521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.722585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.722598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.727034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.727109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.731524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.731573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.731586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.735949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.735996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.736008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.972 [2024-12-11 08:51:50.740540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:42.972 [2024-12-11 08:51:50.740588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.972 [2024-12-11 08:51:50.740601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.232 [2024-12-11 08:51:50.745010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.232 [2024-12-11 08:51:50.745057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.232 [2024-12-11 08:51:50.745069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.232 [2024-12-11 08:51:50.749859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.232 
[2024-12-11 08:51:50.749907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.232 [2024-12-11 08:51:50.749919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.232 [2024-12-11 08:51:50.754109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.232 [2024-12-11 08:51:50.754184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.754197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.758357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.758405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.758417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.762527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.762590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.762602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.766734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.766782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.766793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.771116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.771193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.771207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.775363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.775426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.775438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.779632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.779681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.779692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.784109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.784167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.784180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.788362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.788408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.788420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.792658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.792706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.792718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.796999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.797047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.797059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.801206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.801253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.801265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.805271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.805317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.805328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.809312] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.809358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.809369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.813380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.813426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.813438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.817481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.817528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.817540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.821811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.821858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.821870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.826077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.826125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.826138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.830411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.830460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.830473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.834633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.834680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.834692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:17:43.233 [2024-12-11 08:51:50.838868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.838916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.838928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.843053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.843134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.843159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.847316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.847350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.851649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.851698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.851710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.855922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.855975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.855987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.860127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.860185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.860198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.864292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.864339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.233 [2024-12-11 08:51:50.864351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.233 [2024-12-11 08:51:50.868477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.233 [2024-12-11 08:51:50.868542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.868554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.872831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.872880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.872907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.877127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.877184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.877197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.881560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.881607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.881619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.885843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.885892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.885904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.890459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.890508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.890520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.895005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.895077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.895106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.899596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.899645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.899657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.903895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.903944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.903956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.908165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.908223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.908236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.912282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.912331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.912343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.916509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.916543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.916570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.920768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.920818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.920830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.925130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.925187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.925200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.929436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.929485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.929498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.933754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.933802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.933813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.937988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.938036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.938049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.942263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.942311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.942323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.946381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.946429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.946442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.950765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.950814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.955489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.955553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 
[2024-12-11 08:51:50.955566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.959877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.959937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.964242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.964290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.964302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.968527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.972711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.972763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.972774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.976874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.976926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.976938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.981059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.981112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.981124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.985114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.234 [2024-12-11 08:51:50.985174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:43.234 [2024-12-11 08:51:50.985186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.234 [2024-12-11 08:51:50.989260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.235 [2024-12-11 08:51:50.989308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.235 [2024-12-11 08:51:50.989319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.235 [2024-12-11 08:51:50.993435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.235 [2024-12-11 08:51:50.993484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.235 [2024-12-11 08:51:50.993512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.235 [2024-12-11 08:51:50.997546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.235 [2024-12-11 08:51:50.997597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.235 [2024-12-11 08:51:50.997609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.235 [2024-12-11 08:51:51.002111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.235 [2024-12-11 08:51:51.002174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.235 [2024-12-11 08:51:51.002187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.006469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.006516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.006529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.010986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.011033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.015081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.015157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.015173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.019185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.019219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.019232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.023349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.023384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.023397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.027515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.027562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.027573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.031657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.031706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.031718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.035803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.035850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.035862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.039966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.040014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.040025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.044173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.044230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.044243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.048261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.048307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.052308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.052354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.052366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.056385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.056431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.060444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.060491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.060503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.064549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.064596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.064608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.068657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.068704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.068716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.072847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 
00:17:43.494 [2024-12-11 08:51:51.072894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.072907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.076951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.076998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.077009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.081120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.081176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.081188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.085210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.085256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.085268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.089127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.089183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.089195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.093174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.093220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.093231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.097237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.097283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.097295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.101319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.101365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.101377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.105356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.105404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.105417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.109386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.494 [2024-12-11 08:51:51.109419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.494 [2024-12-11 08:51:51.109430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.494 [2024-12-11 08:51:51.113595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.113628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.117773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.117806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.117818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.121988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.122036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.122048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.126133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.126203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.130376] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.130423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.130435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.134373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.134419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.138436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.138483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.138495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.142473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.142521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.142533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.146546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.146594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.146605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.150976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.151009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.151036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.155596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.155630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.155641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:17:43.495 [2024-12-11 08:51:51.159826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.159873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.159885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.164081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.164128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.164140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.168271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.168317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.168329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.172323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.172370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.172381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.176495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.176542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.176554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.180660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.180707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.180719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.184761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.184808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.184820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.188830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.188877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.188890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.192922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.192969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.192980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.197030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.197078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.197090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.201296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.201344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.201355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.205398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.205446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.205457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.209587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.209636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.209649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.213707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.213755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.213767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.217913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.217961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.217974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.222118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.495 [2024-12-11 08:51:51.222173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.495 [2024-12-11 08:51:51.222185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.495 [2024-12-11 08:51:51.226139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.226196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.226208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.230224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.230271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.230283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.234264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.234309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.234321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.238321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.238367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.238378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.242316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.242362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.242374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.246337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.246383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.246395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.250330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.250376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.250388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.254309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.254355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.254366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.258313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.258359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.258370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.496 [2024-12-11 08:51:51.262542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.496 [2024-12-11 08:51:51.262575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.496 [2024-12-11 08:51:51.262586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.267033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.267101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.267115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.271488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.271521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 
[2024-12-11 08:51:51.271533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.275667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.275714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.275725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.279827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.279873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.279886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.284105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.284162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.284175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.288158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.288215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.288228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.292267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.292313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.292324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.296743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.296791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.296803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.301095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.301142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.301182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.305722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.305771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.305784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.310350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.310400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.310414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.315247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.315283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.315296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.319855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.319903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.319914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.324357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.324407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.324421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.328836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.328883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.328895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.333300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.333349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.333362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.337735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.337783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.337795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.342197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.342256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.342270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.346549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.346596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.346622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.350719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.350766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.350777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.354886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.354934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.354945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.359175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.359211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.757 [2024-12-11 08:51:51.359224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.757 [2024-12-11 08:51:51.363299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.757 [2024-12-11 08:51:51.363333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.363346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.367350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.367415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.367440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.371528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.371575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.371587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.375687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.375735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.375746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.379889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.379936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.379947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.384096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.384144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.384199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.388269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.388315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.392409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 
[2024-12-11 08:51:51.392457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.392468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.396492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.396540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.396566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.400577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.400623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.404893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.404941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.409619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.409668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.409681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.414157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.414229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.418598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.423214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.423262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.427912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.427960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.427971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.432596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.432644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.432656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.436945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.436992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.437004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.441290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.441338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.441350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.445616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.445664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.445676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.450024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.450073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.450085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.454808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.454845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.454858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.459316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.459352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.459365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.463861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.463910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.463922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.468257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.468305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.468317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.472491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.472539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.472552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.476705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.476753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.476765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.758 [2024-12-11 08:51:51.481189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.758 [2024-12-11 08:51:51.481239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.758 [2024-12-11 08:51:51.481252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:17:43.758 [2024-12-11 08:51:51.485492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.485557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.485570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.489734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.489783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.489795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.494072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.494121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.494134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.498400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.498448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.498460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.502649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.502697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.502710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.506896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.506943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.506955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.511229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.511262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.511275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.515557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.515607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.515620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.520044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.520095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.520108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.759 [2024-12-11 08:51:51.524664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:43.759 [2024-12-11 08:51:51.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.759 [2024-12-11 08:51:51.524725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:44.018 [2024-12-11 08:51:51.529263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.018 [2024-12-11 08:51:51.529311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.018 [2024-12-11 08:51:51.529323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:44.018 [2024-12-11 08:51:51.533704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.018 [2024-12-11 08:51:51.533739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.018 [2024-12-11 08:51:51.533753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:44.018 [2024-12-11 08:51:51.538023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.018 [2024-12-11 08:51:51.538072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.018 [2024-12-11 08:51:51.538084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:44.018 [2024-12-11 08:51:51.542252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.018 [2024-12-11 08:51:51.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.018 [2024-12-11 08:51:51.542313] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:44.018 [2024-12-11 08:51:51.546647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.018 [2024-12-11 08:51:51.546696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.018 [2024-12-11 08:51:51.546708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:44.019 [2024-12-11 08:51:51.550821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.019 [2024-12-11 08:51:51.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.019 [2024-12-11 08:51:51.550882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:44.019 [2024-12-11 08:51:51.555220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf21800) 00:17:44.019 [2024-12-11 08:51:51.555255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.019 [2024-12-11 08:51:51.555269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:44.019 7215.00 IOPS, 901.88 MiB/s 00:17:44.019 Latency(us) 00:17:44.019 [2024-12-11T08:51:51.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:44.019 nvme0n1 : 2.00 7215.99 902.00 0.00 0.00 2213.74 1817.13 6345.08 00:17:44.019 [2024-12-11T08:51:51.793Z] =================================================================================================================== 00:17:44.019 [2024-12-11T08:51:51.793Z] Total : 7215.99 902.00 0.00 0.00 2213.74 1817.13 6345.08 00:17:44.019 { 00:17:44.019 "results": [ 00:17:44.019 { 00:17:44.019 "job": "nvme0n1", 00:17:44.019 "core_mask": "0x2", 00:17:44.019 "workload": "randread", 00:17:44.019 "status": "finished", 00:17:44.019 "queue_depth": 16, 00:17:44.019 "io_size": 131072, 00:17:44.019 "runtime": 2.001944, 00:17:44.019 "iops": 7215.986061548175, 00:17:44.019 "mibps": 901.9982576935219, 00:17:44.019 "io_failed": 0, 00:17:44.019 "io_timeout": 0, 00:17:44.019 "avg_latency_us": 2213.7422868865874, 00:17:44.019 "min_latency_us": 1817.1345454545456, 00:17:44.019 "max_latency_us": 6345.076363636364 00:17:44.019 } 00:17:44.019 ], 00:17:44.019 "core_count": 1 00:17:44.019 } 00:17:44.019 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:44.019 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:44.019 | .driver_specific 00:17:44.019 | .nvme_error 00:17:44.019 | .status_code 00:17:44.019 | .command_transient_transport_error' 00:17:44.019 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:44.019 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 )) 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80995 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80995 ']' 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80995 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80995 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.278 killing process with pid 80995 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80995' 00:17:44.278 Received shutdown signal, test time was about 2.000000 seconds 00:17:44.278 00:17:44.278 Latency(us) 00:17:44.278 [2024-12-11T08:51:52.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.278 [2024-12-11T08:51:52.052Z] =================================================================================================================== 00:17:44.278 [2024-12-11T08:51:52.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80995 00:17:44.278 08:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80995 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81042 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81042 /var/tmp/bperf.sock 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81042 ']' 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.537 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock... 00:17:44.537 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:44.538 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.538 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:44.538 [2024-12-11 08:51:52.096032] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:17:44.538 [2024-12-11 08:51:52.096120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81042 ] 00:17:44.538 [2024-12-11 08:51:52.234315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.538 [2024-12-11 08:51:52.265116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.538 [2024-12-11 08:51:52.294140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.797 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.797 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:44.797 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:44.797 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.056 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.314 nvme0n1 00:17:45.314 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:45.314 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.315 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.315 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.315 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
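For reference, the pass/fail check for each of these digest-error injection runs is the get_transient_errcount step traced above: once bdevperf finishes its 2-second run, its per-bdev NVMe error counters (enabled by the --nvme-error-stat option passed to bdev_nvme_set_options) are read over the bperf RPC socket, and the transient transport error count must be greater than zero. A minimal sketch of that check, assembled only from the rpc.py and jq invocations already shown in this trace (socket path and bdev name as used in this run):

  # read bdevperf's iostat for nvme0n1 and pull out the transient transport error counter
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the test asserts the count is non-zero; the randread pass above counted 466 such errors
  (( count > 0 ))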
00:17:45.315 08:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:45.574 Running I/O for 2 seconds... 00:17:45.574 [2024-12-11 08:51:53.105389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7100 00:17:45.574 [2024-12-11 08:51:53.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.107021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.120201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7970 00:17:45.574 [2024-12-11 08:51:53.121705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.121754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.134664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef81e0 00:17:45.574 [2024-12-11 08:51:53.136294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.136339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.149226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef8a50 00:17:45.574 [2024-12-11 08:51:53.150719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.150765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.163919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef92c0 00:17:45.574 [2024-12-11 08:51:53.165409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.165456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.179327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef9b30 00:17:45.574 [2024-12-11 08:51:53.181041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.181108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.195780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efa3a0 00:17:45.574 [2024-12-11 08:51:53.197335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.197381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.210534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efac10 00:17:45.574 [2024-12-11 08:51:53.212040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.212086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.225316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efb480 00:17:45.574 [2024-12-11 08:51:53.226747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.226792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.239950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efbcf0 00:17:45.574 [2024-12-11 08:51:53.241430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.241473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.254432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efc560 00:17:45.574 [2024-12-11 08:51:53.255891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.255936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.269017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efcdd0 00:17:45.574 [2024-12-11 08:51:53.270432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.270476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.283918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efd640 00:17:45.574 [2024-12-11 08:51:53.285425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.285485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.299007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efdeb0 00:17:45.574 [2024-12-11 08:51:53.300449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.300494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.313554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efe720 00:17:45.574 [2024-12-11 08:51:53.314837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.314881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:45.574 [2024-12-11 08:51:53.328005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eff3c8 00:17:45.574 [2024-12-11 08:51:53.329337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.574 [2024-12-11 08:51:53.329382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:45.833 [2024-12-11 08:51:53.350175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eff3c8 00:17:45.833 [2024-12-11 08:51:53.352973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.833 [2024-12-11 08:51:53.353019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.833 [2024-12-11 08:51:53.367222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efe720 00:17:45.833 [2024-12-11 08:51:53.369879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.833 [2024-12-11 08:51:53.369923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:45.833 [2024-12-11 08:51:53.383131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efdeb0 00:17:45.833 [2024-12-11 08:51:53.385759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.833 [2024-12-11 08:51:53.385803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:45.833 [2024-12-11 08:51:53.398993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efd640 00:17:45.833 [2024-12-11 08:51:53.401590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.833 [2024-12-11 08:51:53.401634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.414034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efcdd0 00:17:45.834 [2024-12-11 08:51:53.416506] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.416549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.428727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efc560 00:17:45.834 [2024-12-11 08:51:53.431082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.431128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.443349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efbcf0 00:17:45.834 [2024-12-11 08:51:53.445659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.445704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.457959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efb480 00:17:45.834 [2024-12-11 08:51:53.460362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.460406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.472407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efac10 00:17:45.834 [2024-12-11 08:51:53.474628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.474673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.486888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016efa3a0 00:17:45.834 [2024-12-11 08:51:53.489248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.489293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.501550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef9b30 00:17:45.834 [2024-12-11 08:51:53.503860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.503904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.516332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef92c0 00:17:45.834 [2024-12-11 
08:51:53.518510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.518554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.530851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef8a50 00:17:45.834 [2024-12-11 08:51:53.533083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.533127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.545501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef81e0 00:17:45.834 [2024-12-11 08:51:53.547775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.547819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.560274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7970 00:17:45.834 [2024-12-11 08:51:53.562404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.562433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.574749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7100 00:17:45.834 [2024-12-11 08:51:53.576929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.576973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.589285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef6890 00:17:45.834 [2024-12-11 08:51:53.591699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.834 [2024-12-11 08:51:53.591744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.834 [2024-12-11 08:51:53.605347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef6020 00:17:46.093 [2024-12-11 08:51:53.607703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.093 [2024-12-11 08:51:53.607748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:46.093 [2024-12-11 08:51:53.620453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef57b0 
00:17:46.093 [2024-12-11 08:51:53.622532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.093 [2024-12-11 08:51:53.622578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:46.093 [2024-12-11 08:51:53.634844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef4f40 00:17:46.093 [2024-12-11 08:51:53.637004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.093 [2024-12-11 08:51:53.637048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:46.093 [2024-12-11 08:51:53.649479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef46d0 00:17:46.093 [2024-12-11 08:51:53.651743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.093 [2024-12-11 08:51:53.651789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:46.093 [2024-12-11 08:51:53.666101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef3e60 00:17:46.093 [2024-12-11 08:51:53.668400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.668444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.683099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef35f0 00:17:46.094 [2024-12-11 08:51:53.685344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.685389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.698736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef2d80 00:17:46.094 [2024-12-11 08:51:53.700842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.700887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.713530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef2510 00:17:46.094 [2024-12-11 08:51:53.715629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.715675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.728132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with 
pdu=0x200016ef1ca0 00:17:46.094 [2024-12-11 08:51:53.730047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.730092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.742641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef1430 00:17:46.094 [2024-12-11 08:51:53.744671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.744715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.757440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef0bc0 00:17:46.094 [2024-12-11 08:51:53.759403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.759448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.771925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef0350 00:17:46.094 [2024-12-11 08:51:53.773896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.773940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.786568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eefae0 00:17:46.094 [2024-12-11 08:51:53.788541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.802373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eef270 00:17:46.094 [2024-12-11 08:51:53.804435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.804483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.818660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeea00 00:17:46.094 [2024-12-11 08:51:53.820754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.820801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.834415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b06770) with pdu=0x200016eee190 00:17:46.094 [2024-12-11 08:51:53.836334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.836381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.849487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eed920 00:17:46.094 [2024-12-11 08:51:53.851428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.094 [2024-12-11 08:51:53.851488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:46.094 [2024-12-11 08:51:53.864887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eed0b0 00:17:46.353 [2024-12-11 08:51:53.866919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.880830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eec840 00:17:46.353 [2024-12-11 08:51:53.882718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.882763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.895796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eebfd0 00:17:46.353 [2024-12-11 08:51:53.897676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.897722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.910641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeb760 00:17:46.353 [2024-12-11 08:51:53.912633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.912679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.925675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeaef0 00:17:46.353 [2024-12-11 08:51:53.927579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.927608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.940951] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b06770) with pdu=0x200016eea680 00:17:46.353 [2024-12-11 08:51:53.942750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.942796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:46.353 [2024-12-11 08:51:53.956029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee9e10 00:17:46.353 [2024-12-11 08:51:53.957834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.353 [2024-12-11 08:51:53.957880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:53.971726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee95a0 00:17:46.354 [2024-12-11 08:51:53.973454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:53.973501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:53.987103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee8d30 00:17:46.354 [2024-12-11 08:51:53.988770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:53.988814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.001789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee84c0 00:17:46.354 [2024-12-11 08:51:54.003570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.003615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.016640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee7c50 00:17:46.354 [2024-12-11 08:51:54.018343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.018372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.032871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee73e0 00:17:46.354 [2024-12-11 08:51:54.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.034681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.049046] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee6b70 00:17:46.354 [2024-12-11 08:51:54.050712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.050757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.064992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee6300 00:17:46.354 [2024-12-11 08:51:54.066694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.066738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.080373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee5a90 00:17:46.354 [2024-12-11 08:51:54.082024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.082070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.354 16700.00 IOPS, 65.23 MiB/s [2024-12-11T08:51:54.128Z] [2024-12-11 08:51:54.097570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee5220 00:17:46.354 [2024-12-11 08:51:54.099181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.099212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:46.354 [2024-12-11 08:51:54.112810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee49b0 00:17:46.354 [2024-12-11 08:51:54.114413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.354 [2024-12-11 08:51:54.114446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.129295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee4140 00:17:46.614 [2024-12-11 08:51:54.130955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.131001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.144808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee38d0 00:17:46.614 [2024-12-11 08:51:54.146367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.146413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.160152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee3060 00:17:46.614 [2024-12-11 08:51:54.161729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.161775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.174939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee27f0 00:17:46.614 [2024-12-11 08:51:54.176457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.176502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.189618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee1f80 00:17:46.614 [2024-12-11 08:51:54.191026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.191092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.204279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee1710 00:17:46.614 [2024-12-11 08:51:54.205690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.205735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.219098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee0ea0 00:17:46.614 [2024-12-11 08:51:54.220672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.220717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.234405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee0630 00:17:46.614 [2024-12-11 08:51:54.235834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.235879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.249072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edfdc0 00:17:46.614 [2024-12-11 08:51:54.250444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.250489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.263687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edf550 00:17:46.614 [2024-12-11 08:51:54.265037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.265081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.278159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edece0 00:17:46.614 [2024-12-11 08:51:54.279543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.279587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.292696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ede470 00:17:46.614 [2024-12-11 08:51:54.293998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.294042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.313163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eddc00 00:17:46.614 [2024-12-11 08:51:54.315651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.315696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.327868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ede470 00:17:46.614 [2024-12-11 08:51:54.330317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.330361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.342546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edece0 00:17:46.614 [2024-12-11 08:51:54.344946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.344991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.357197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edf550 00:17:46.614 [2024-12-11 08:51:54.359649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.359693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:46.614 [2024-12-11 08:51:54.372622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016edfdc0 00:17:46.614 [2024-12-11 08:51:54.375106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.614 [2024-12-11 08:51:54.375150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.390042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee0630 00:17:46.874 [2024-12-11 08:51:54.392689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.392734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.406017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee0ea0 00:17:46.874 [2024-12-11 08:51:54.408599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.408643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.421858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee1710 00:17:46.874 [2024-12-11 08:51:54.424352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.424398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.437202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee1f80 00:17:46.874 [2024-12-11 08:51:54.439548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.439593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.451793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee27f0 00:17:46.874 [2024-12-11 08:51:54.454061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.454104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.466294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee3060 00:17:46.874 [2024-12-11 08:51:54.468584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.468629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.480842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee38d0 00:17:46.874 [2024-12-11 08:51:54.483107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.874 [2024-12-11 08:51:54.483161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:46.874 [2024-12-11 08:51:54.495637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee4140 00:17:46.875 [2024-12-11 08:51:54.497908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.497955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.510561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee49b0 00:17:46.875 [2024-12-11 08:51:54.512862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.512906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.525524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee5220 00:17:46.875 [2024-12-11 08:51:54.527986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.528033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.541137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee5a90 00:17:46.875 [2024-12-11 08:51:54.543298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.543330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.555744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee6300 00:17:46.875 [2024-12-11 08:51:54.557932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.557976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.570923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee6b70 00:17:46.875 [2024-12-11 08:51:54.573104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 
08:51:54.573155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.585787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee73e0 00:17:46.875 [2024-12-11 08:51:54.587970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.588015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.600672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee7c50 00:17:46.875 [2024-12-11 08:51:54.602739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.602782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.615273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee84c0 00:17:46.875 [2024-12-11 08:51:54.617372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.617416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.630105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee8d30 00:17:46.875 [2024-12-11 08:51:54.632249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.875 [2024-12-11 08:51:54.632279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:46.875 [2024-12-11 08:51:54.645765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee95a0 00:17:47.161 [2024-12-11 08:51:54.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.648159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.662646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ee9e10 00:17:47.161 [2024-12-11 08:51:54.664755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.664788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.678274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eea680 00:17:47.161 [2024-12-11 08:51:54.680485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:47.161 [2024-12-11 08:51:54.680565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.695314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeaef0 00:17:47.161 [2024-12-11 08:51:54.697634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.697683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.711894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeb760 00:17:47.161 [2024-12-11 08:51:54.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.714071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.726970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eebfd0 00:17:47.161 [2024-12-11 08:51:54.729027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.729071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.741857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eec840 00:17:47.161 [2024-12-11 08:51:54.743913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.743957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.756633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eed0b0 00:17:47.161 [2024-12-11 08:51:54.758550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.758597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.771113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eed920 00:17:47.161 [2024-12-11 08:51:54.773036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.773081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.785693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eee190 00:17:47.161 [2024-12-11 08:51:54.787605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21189 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.787651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.800308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eeea00 00:17:47.161 [2024-12-11 08:51:54.802207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.802244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.814846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eef270 00:17:47.161 [2024-12-11 08:51:54.816767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.816812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:47.161 [2024-12-11 08:51:54.829555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016eefae0 00:17:47.161 [2024-12-11 08:51:54.831391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.161 [2024-12-11 08:51:54.831438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.844215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef0350 00:17:47.162 [2024-12-11 08:51:54.846012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.846058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.858837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef0bc0 00:17:47.162 [2024-12-11 08:51:54.860726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.860773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.873574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef1430 00:17:47.162 [2024-12-11 08:51:54.875477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.875521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.888285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef1ca0 00:17:47.162 [2024-12-11 08:51:54.890039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11035 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.890084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.904892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef2510 00:17:47.162 [2024-12-11 08:51:54.906810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.906841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:47.162 [2024-12-11 08:51:54.920620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef2d80 00:17:47.162 [2024-12-11 08:51:54.922378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.162 [2024-12-11 08:51:54.922423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:47.420 [2024-12-11 08:51:54.936243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef35f0 00:17:47.421 [2024-12-11 08:51:54.938109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:54.938163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:54.950959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef3e60 00:17:47.421 [2024-12-11 08:51:54.952763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:54.952808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:54.965823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef46d0 00:17:47.421 [2024-12-11 08:51:54.967574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:54.967618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:54.980898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef4f40 00:17:47.421 [2024-12-11 08:51:54.982675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:54.982722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:54.996997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef57b0 00:17:47.421 [2024-12-11 08:51:54.998738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:8470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:54.998787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.013326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef6020 00:17:47.421 [2024-12-11 08:51:55.015109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.015166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.028882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef6890 00:17:47.421 [2024-12-11 08:51:55.030562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.030608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.044179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7100 00:17:47.421 [2024-12-11 08:51:55.045749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.045794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.059336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef7970 00:17:47.421 [2024-12-11 08:51:55.060919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.060964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.074420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef81e0 00:17:47.421 [2024-12-11 08:51:55.076010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.076056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:47.421 [2024-12-11 08:51:55.089403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06770) with pdu=0x200016ef8a50 00:17:47.421 [2024-12-11 08:51:55.090977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:47.421 [2024-12-11 08:51:55.091022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:47.421 16699.00 IOPS, 65.23 MiB/s 00:17:47.421 Latency(us) 00:17:47.421 [2024-12-11T08:51:55.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:17:47.421 nvme0n1 : 2.00 16667.52 65.11 0.00 0.00 7667.08 5213.09 27763.43 00:17:47.421 [2024-12-11T08:51:55.195Z] =================================================================================================================== 00:17:47.421 [2024-12-11T08:51:55.195Z] Total : 16667.52 65.11 0.00 0.00 7667.08 5213.09 27763.43 00:17:47.421 { 00:17:47.421 "results": [ 00:17:47.421 { 00:17:47.421 "job": "nvme0n1", 00:17:47.421 "core_mask": "0x2", 00:17:47.421 "workload": "randwrite", 00:17:47.421 "status": "finished", 00:17:47.421 "queue_depth": 128, 00:17:47.421 "io_size": 4096, 00:17:47.421 "runtime": 2.003898, 00:17:47.421 "iops": 16667.515013239197, 00:17:47.421 "mibps": 65.10748052046561, 00:17:47.421 "io_failed": 0, 00:17:47.421 "io_timeout": 0, 00:17:47.421 "avg_latency_us": 7667.080542623842, 00:17:47.421 "min_latency_us": 5213.090909090909, 00:17:47.421 "max_latency_us": 27763.432727272728 00:17:47.421 } 00:17:47.421 ], 00:17:47.421 "core_count": 1 00:17:47.421 } 00:17:47.421 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:47.421 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:47.421 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:47.421 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:47.421 | .driver_specific 00:17:47.421 | .nvme_error 00:17:47.421 | .status_code 00:17:47.421 | .command_transient_transport_error' 00:17:47.679 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 )) 00:17:47.679 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81042 00:17:47.679 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81042 ']' 00:17:47.679 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81042 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81042 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:47.680 killing process with pid 81042 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81042' 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81042 00:17:47.680 Received shutdown signal, test time was about 2.000000 seconds 00:17:47.680 00:17:47.680 Latency(us) 00:17:47.680 [2024-12-11T08:51:55.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.680 [2024-12-11T08:51:55.454Z] =================================================================================================================== 00:17:47.680 
[2024-12-11T08:51:55.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.680 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81042 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81095 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81095 /var/tmp/bperf.sock 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81095 ']' 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.939 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:47.939 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:47.939 Zero copy mechanism will not be used. 00:17:47.939 [2024-12-11 08:51:55.618940] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
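Editorial note: the run_bperf_err step traced above reduces to launching a second bdevperf instance, paused on its own RPC socket, and waiting for that socket to come up. A minimal stand-alone sketch follows; the paths and flags are copied from the trace, while the polling loop is only an assumed stand-in for the harness's waitforlisten helper.
SPDK=/home/vagrant/spdk_repo/spdk            # repo location as shown in the log
SOCK=/var/tmp/bperf.sock
# -z keeps bdevperf idle until perform_tests is issued over the RPC socket
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# stand-in for waitforlisten: poll until the socket answers a basic RPC
until "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done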
00:17:47.939 [2024-12-11 08:51:55.619025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81095 ] 00:17:48.198 [2024-12-11 08:51:55.757691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.198 [2024-12-11 08:51:55.788878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.198 [2024-12-11 08:51:55.818200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.198 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.198 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:48.198 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:48.198 08:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.456 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.715 nvme0n1 00:17:48.715 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:48.715 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.715 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.975 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.975 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:48.975 08:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.975 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.975 Zero copy mechanism will not be used. 00:17:48.975 Running I/O for 2 seconds... 
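Editorial note: spelled out as plain RPC calls, the setup xtrace above plus the pass/fail check the harness applies after a run (the get_transient_errcount/jq filter seen earlier in the log) amount to roughly the sketch below. The bdevperf socket, attach arguments, injection arguments, and jq path are copied from the trace; the target-side socket is an assumption, since the harness reaches it through its rpc_cmd wrapper rather than an explicit -s argument.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock                     # bdevperf (host) RPC socket
TGT_SOCK=/var/tmp/spdk.sock                  # assumed: default socket of the nvmf target app
# host side: keep per-controller NVMe error counters and retry indefinitely at the bdev layer
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side: make sure crc32c error injection is off while the controller attaches
"$SPDK"/scripts/rpc.py -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
# attach the controller with data digest enabled (--ddgst), producing bdev nvme0n1
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: enable crc32c corruption (the -i 32 argument is taken verbatim from the trace)
"$SPDK"/scripts/rpc.py -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
# kick off the queued bdevperf job
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
# pass/fail: the injected digest errors must surface as transient transport errors
errs=$("$SPDK"/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))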
00:17:48.975 [2024-12-11 08:51:56.626295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.626385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.626414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.631308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.631418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.631441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.636251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.636325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.636348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.641092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.641192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.641215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.645945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.646030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.646052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.650807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.650894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.650916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.655624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.655698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.655720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.660413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.660499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.660520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.665126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.665225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.665247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.669904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.669981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.670002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.674674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.674756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.674777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.975 [2024-12-11 08:51:56.679534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.975 [2024-12-11 08:51:56.679620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.975 [2024-12-11 08:51:56.679642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.684262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.684332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.684353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.689127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.689257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.689278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.694010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.694088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.694110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.698839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.698924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.698945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.703689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.703773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.703795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.708459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.708560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.708581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.713223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.713309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.713330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.718036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.718109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.718130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.723135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.723211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.723237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.728445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.728539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.728562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.733911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.734025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.734047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.739606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.739701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.739725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:48.976 [2024-12-11 08:51:56.745427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:48.976 [2024-12-11 08:51:56.745526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.976 [2024-12-11 08:51:56.745551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.751045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.751187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.751210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.756070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.756160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.756182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.761209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.761313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.761334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.766295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.766379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.766400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.771102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.771202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.771225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.775868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.775954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.775975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.780618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.780700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.780721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.785505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.785590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.785611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.790301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.790385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.790407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.795012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.795157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 
08:51:56.795194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.799860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.799936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.799957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.804681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.804768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.804789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.809552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.809637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.809659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.814375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.814460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.814481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.819231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.819309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.819332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.824068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.824158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.824180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.828926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.829001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:49.237 [2024-12-11 08:51:56.829024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.833751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.833848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.833869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.838583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.838683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.838704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.843456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.843557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.843578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.848197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.848285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.848306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.852874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.852950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.852971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.857704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.237 [2024-12-11 08:51:56.857775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.237 [2024-12-11 08:51:56.857796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.237 [2024-12-11 08:51:56.862562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.862631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.862652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.867747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.867821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.867842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.873148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.873243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.873266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.878093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.878194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.878216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.882860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.882937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.882958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.887822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.887894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.887915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.892663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.892746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.892766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.897437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.897539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.897560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.902234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.902318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.902339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.906954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.907038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.907099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.911782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.911877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.911898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.916627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.916705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.916725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.921427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.921529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.921550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.926260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.926346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.926366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.931024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.931138] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.931195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.936047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.936120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.936157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.940876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.940960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.940980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.945747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.945836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.945857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.950555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.950640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.950662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.955399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.955516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.955538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.960194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.960283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.960305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.965029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.965106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.965127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.969812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.969921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.969942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.974604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.974699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.979486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.979564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.979585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.984202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.984276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.984298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.989044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.989118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.238 [2024-12-11 08:51:56.989140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.238 [2024-12-11 08:51:56.993973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.238 [2024-12-11 08:51:56.994058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.239 [2024-12-11 08:51:56.994079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.239 [2024-12-11 08:51:56.998825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.239 [2024-12-11 
08:51:56.998911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.239 [2024-12-11 08:51:56.998932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.239 [2024-12-11 08:51:57.003793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.239 [2024-12-11 08:51:57.003913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.239 [2024-12-11 08:51:57.003934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.009087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.009174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.009196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.014095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.014211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.014233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.018901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.018985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.019006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.023799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.023874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.023895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.028684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.028757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.028779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.033556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with 
pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.033643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.033665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.038366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.038441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.038462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.043118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.043222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.043245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.047913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.047989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.048010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.052694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.052782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.052802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.057485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.057573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.057594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.062236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.062323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.062344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.066957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.067045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.067108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.071871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.071957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.071978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.076610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.076696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.076717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.081475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.081582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.081603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.086272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.086359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.086380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.090985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.091100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.091123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.095905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.095991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.096012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.100673] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.100749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.100770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.105414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.105502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.110096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.110218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.110239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.114824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.114913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.114934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.119718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.119794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.119816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.124472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.124549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.124570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.129303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.129384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.500 [2024-12-11 08:51:57.129405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.500 [2024-12-11 08:51:57.134039] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.500 [2024-12-11 08:51:57.134114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.134135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.138981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.139126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.139149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.143933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.144029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.144051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.148660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.148734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.148755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.153441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.153527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.153563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.158136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.158235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.158255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.162874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.162974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.162995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 
[2024-12-11 08:51:57.167747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.167833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.167853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.172556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.172630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.172650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.177341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.177428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.177449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.182077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.182161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.186760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.186841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.186862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.191640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.191719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.191739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.196540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.196635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.196657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.201309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.201398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.201420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.206008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.206093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.206114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.210743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.210831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.210852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.215572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.215680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.220270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.220357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.220378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.224993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.225078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.225099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.229878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.229951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.229972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.234603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.234685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.234706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.239952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.240029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.240052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.245286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.245368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.245389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.250039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.250125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.250146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.254773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.254861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.254882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.501 [2024-12-11 08:51:57.259605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.501 [2024-12-11 08:51:57.259701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.501 [2024-12-11 08:51:57.259722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.502 [2024-12-11 08:51:57.264421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.502 [2024-12-11 08:51:57.264506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.502 [2024-12-11 08:51:57.264527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.502 [2024-12-11 08:51:57.269644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.502 [2024-12-11 08:51:57.269728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.502 [2024-12-11 08:51:57.269749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.274659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.274747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.274767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.279809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.279907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.279927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.284677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.284772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.284793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.289555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.289640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.289661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.294396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.294483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.299153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.299247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.299270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.303941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.304019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.304039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.308765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.308851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.308871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.313577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.313662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.313683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.318367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.318452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.323079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.323208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.323231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.327860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.327955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.327976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.332676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.332757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 
08:51:57.332777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.337485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.337574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.337595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.342303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.342396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.342417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.347128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.347216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.347239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.351868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.351940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.351961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.356633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.356706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.356727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.361878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.361953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.361975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.366933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.367021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.762 [2024-12-11 08:51:57.367042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.372001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.372063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.372086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.377396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.762 [2024-12-11 08:51:57.377502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.762 [2024-12-11 08:51:57.377525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.762 [2024-12-11 08:51:57.382826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.382930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.382952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.388159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.388282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.393428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.393506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.393543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.398416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.398545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.398567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.403558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.403630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.403653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.408847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.408937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.408959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.413723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.413810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.413831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.418540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.418618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.418639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.423506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.423583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.423604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.428395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.428475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.428496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.433277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.433360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.438376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.438467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.438506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.443790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.443911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.443933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.449316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.449385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.449408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.454656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.454720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.454743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.459946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.460052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.460075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.465393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.465460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.465484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.470755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.470848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.470871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.476125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.476271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.476294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.481444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.481551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.486501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.486591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.486612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.491624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.491714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.491737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.496864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.496934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.496957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.502174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.502311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.502335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.507101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.507185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.507209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.512216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.512288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.512310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.517159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.517270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.763 [2024-12-11 08:51:57.522398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.763 [2024-12-11 08:51:57.522488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.763 [2024-12-11 08:51:57.522511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:49.764 [2024-12-11 08:51:57.527665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:49.764 [2024-12-11 08:51:57.527745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.764 [2024-12-11 08:51:57.527767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.533090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.533197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.533233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.538329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.538457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.538480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.543356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.543424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.543447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.548296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 
08:51:57.548383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.548404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.553453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.553520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.553544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.558516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.558618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.558639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.563574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.563659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.563681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.568742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.568816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.568838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.573655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.573738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.573759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.024 [2024-12-11 08:51:57.578746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.024 [2024-12-11 08:51:57.578840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.024 [2024-12-11 08:51:57.578861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.583721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with 
pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.583819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.583856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.588721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.588809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.588830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.593741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.593828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.593849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.598585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.598672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.598692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.603541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.603626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.603648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.608304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.608390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.608411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.613056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.613140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.613176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 6223.00 IOPS, 777.88 MiB/s [2024-12-11T08:51:57.799Z] [2024-12-11 08:51:57.619120] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.619207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.619231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.623972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.624058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.624079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.628879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.628994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.629015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.633833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.633918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.633939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.638738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.638819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.638840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.643651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.643736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.643757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.648574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.648660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.648680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 
[2024-12-11 08:51:57.653419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.653506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.653527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.658294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.658356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.658377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.663195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.663270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.663293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.668015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.668090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.668110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.672952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.673029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.673050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.677740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.677826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.677847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.682613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.682686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.682707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 
p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.687294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.687406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.687441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.692201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.692297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.692327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.696986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.697060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.697080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.701805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.701908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.701928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.706548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.706635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.706656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.711316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.025 [2024-12-11 08:51:57.711456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.025 [2024-12-11 08:51:57.711477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.025 [2024-12-11 08:51:57.716118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.716245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.716266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.720877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.720961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.720982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.725650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.725724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.725745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.730510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.730620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.730641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.735293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.735399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.740090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.740218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.740239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.744999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.745087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.745107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.750110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.750241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.750264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.755485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.755584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.755606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.761087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.761194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.761218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.766376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.766454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.766476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.771565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.771659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.771682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.776674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.776760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.776782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.781612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.781699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.781719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.786369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.786454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.786474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.026 [2024-12-11 08:51:57.791256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.026 [2024-12-11 08:51:57.791351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.026 [2024-12-11 08:51:57.791374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.287 [2024-12-11 08:51:57.796545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.287 [2024-12-11 08:51:57.796629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.287 [2024-12-11 08:51:57.796650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.287 [2024-12-11 08:51:57.801615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.287 [2024-12-11 08:51:57.801768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.287 [2024-12-11 08:51:57.801791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.287 [2024-12-11 08:51:57.806935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.287 [2024-12-11 08:51:57.807022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.287 [2024-12-11 08:51:57.807042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.287 [2024-12-11 08:51:57.812166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.287 [2024-12-11 08:51:57.812263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.287 [2024-12-11 08:51:57.812283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.287 [2024-12-11 08:51:57.816908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.816994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.821826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.821907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 
08:51:57.821928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.826622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.826708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.826728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.831475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.831549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.831569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.836274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.836360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.836380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.841038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.841109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.841129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.845859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.845935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.845955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.850587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.850669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.850689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.855439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.855512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:50.288 [2024-12-11 08:51:57.855533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.860102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.860187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.860222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.864847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.864939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.864960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.869776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.869858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.869879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.874801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.874886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.874906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.879705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.879791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.879812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.884527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.884625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.884646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.889322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.889419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.889439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.894030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.894107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.894127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.898840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.898915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.898936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.903689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.903775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.903796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.908460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.908569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.908591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.913252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.913319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.913340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.917932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.918019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.918040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.922755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.922828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.922849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.927652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.927738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.927759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.932467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.932556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.932577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.937214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.937299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.937321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.942043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.942116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.288 [2024-12-11 08:51:57.942136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.288 [2024-12-11 08:51:57.946896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.288 [2024-12-11 08:51:57.946969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.946990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.951801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.951874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.951895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.956570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.956644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.956665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.961353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.961439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.961460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.966092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.966184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.966205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.970832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.970906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.970927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.975730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.975974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.975997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.980868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.980975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.980996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.985636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.985715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.985736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.990364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.990442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.990463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.995078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:57.995220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:57.995244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:57.999955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.000203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.000225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.004952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.005029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.009687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.009778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.009800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.014501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.014580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.014601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.019275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.019379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.019416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.024019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 
08:51:58.024109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.024129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.028825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.028932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.028953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.033734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.033811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.033832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.038464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.038542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.038564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.043196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.043282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.043305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.048003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.048087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.048108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.052785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.052865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.052901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.289 [2024-12-11 08:51:58.058086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with 
pdu=0x200016efef90 00:17:50.289 [2024-12-11 08:51:58.058253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.289 [2024-12-11 08:51:58.058276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.062999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.063253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.063276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.068233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.068477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.068724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.073329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.073567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.073830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.078365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.078631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.078800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.083484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.083746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.084006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.088598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.088852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.089068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.093811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.094067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.094270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.098838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.099137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.099340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.103955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.104242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.104383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.109096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.109231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.109253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.113859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.113936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.118560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.118808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.118830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.123722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.123971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.124188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.128759] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.129033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.129200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.133811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.134062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.134257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.138938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.139215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.139479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.144030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.144322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.144515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.149224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.149460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.149670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.154279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.154359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.154382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.158914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.158992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.159013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.163806] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.163906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.163927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.168678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.168757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.168779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.173654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.173928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.173951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.179315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.179386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.184259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.550 [2024-12-11 08:51:58.184355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.550 [2024-12-11 08:51:58.184376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.550 [2024-12-11 08:51:58.188986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.189087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.189107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.193913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.194005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.194025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 
[2024-12-11 08:51:58.198670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.198750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.198771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.203363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.203510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.203531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.208159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.208268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.208289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.212990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.213058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.213079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.217942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.218023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.218044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.222789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.223081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.223105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.228045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.228126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.228146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.233045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.233125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.233161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.237866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.237966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.237988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.242727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.242964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.242986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.247887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.247968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.247988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.252722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.252816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.252838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.257587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.257679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.257700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.262370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.262475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.262496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.267098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.267227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.267251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.271905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.271983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.272004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.276716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.276809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.276830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.281541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.281643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.281681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.286716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.286799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.286820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.291825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.291917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.291938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.296704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.296797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.296819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.301567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.301645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.301666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.306381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.306463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.306485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.311157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.311251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.311286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.551 [2024-12-11 08:51:58.316022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.551 [2024-12-11 08:51:58.316103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.551 [2024-12-11 08:51:58.316124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.321262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.321338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.321360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.326326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.326422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.326444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.331237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.331308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.331332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.336006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.336084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.336105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.340903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.340998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.341019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.345832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.346071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.346093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.351001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.351135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.351178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.355865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.355971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.355992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.360726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.360819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.360840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.365569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.365658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.365678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.370275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.370361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.370382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.374981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.375115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.375138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.379855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.379934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.379954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.384666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.384758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.384779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.389593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.389694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.394253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.394345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 08:51:58.394373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.399026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.399161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.812 [2024-12-11 
08:51:58.399214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.812 [2024-12-11 08:51:58.403860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.812 [2024-12-11 08:51:58.403938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.403958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.408669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.408760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.408782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.413392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.413487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.413508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.418119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.418229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.422850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.422950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.422970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.427817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.427896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.427917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.432912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.433164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:50.813 [2024-12-11 08:51:58.433211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.438488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.438570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.438592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.443433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.443528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.443549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.448163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.448285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.448316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.452987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.453258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.453282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.458078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.458354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.458543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.463353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.463698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.463877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.468843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.469122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.469511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.474398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.474689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.474926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.480091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.480354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.480537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.485773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.486074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.486300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.491295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.491595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.491788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.496651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.496910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.497088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.502041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.502124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.502162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.507274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.507347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.507371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.512441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.512568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.512589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.517382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.517461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.517483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.522173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.522247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.527098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.527236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.527261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.531982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.532244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.532280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.537098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.537366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.537572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.813 [2024-12-11 08:51:58.542127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.813 [2024-12-11 08:51:58.542414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.813 [2024-12-11 08:51:58.542577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.546993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.547296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.547492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.552007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.552297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.552471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.557185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.557462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.557630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.562479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.562768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.562927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.567775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.568049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.568252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.573445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.573725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.573942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.814 [2024-12-11 08:51:58.579008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:50.814 [2024-12-11 08:51:58.579136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.814 [2024-12-11 08:51:58.579176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.584613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.584710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.584734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.590272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.590386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.590410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.595837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.596073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.596098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.601578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.601807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.602181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.607113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.607347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.607602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:51.073 [2024-12-11 08:51:58.612538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.612808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.612981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:51.073 6228.00 IOPS, 778.50 MiB/s [2024-12-11T08:51:58.847Z] [2024-12-11 08:51:58.619090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b06ab0) 
with pdu=0x200016efef90 00:17:51.073 [2024-12-11 08:51:58.619349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.073 [2024-12-11 08:51:58.619575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:51.073 00:17:51.073 Latency(us) 00:17:51.073 [2024-12-11T08:51:58.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.073 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:51.073 nvme0n1 : 2.00 6223.72 777.96 0.00 0.00 2564.76 1966.08 13107.20 00:17:51.073 [2024-12-11T08:51:58.847Z] =================================================================================================================== 00:17:51.073 [2024-12-11T08:51:58.847Z] Total : 6223.72 777.96 0.00 0.00 2564.76 1966.08 13107.20 00:17:51.073 { 00:17:51.073 "results": [ 00:17:51.073 { 00:17:51.073 "job": "nvme0n1", 00:17:51.073 "core_mask": "0x2", 00:17:51.073 "workload": "randwrite", 00:17:51.073 "status": "finished", 00:17:51.073 "queue_depth": 16, 00:17:51.073 "io_size": 131072, 00:17:51.073 "runtime": 2.004751, 00:17:51.073 "iops": 6223.715563678482, 00:17:51.073 "mibps": 777.9644454598102, 00:17:51.073 "io_failed": 0, 00:17:51.073 "io_timeout": 0, 00:17:51.073 "avg_latency_us": 2564.758477198044, 00:17:51.073 "min_latency_us": 1966.08, 00:17:51.073 "max_latency_us": 13107.2 00:17:51.073 } 00:17:51.073 ], 00:17:51.073 "core_count": 1 00:17:51.073 } 00:17:51.073 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:51.073 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:51.073 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:51.073 | .driver_specific 00:17:51.073 | .nvme_error 00:17:51.073 | .status_code 00:17:51.073 | .command_transient_transport_error' 00:17:51.073 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81095 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81095 ']' 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81095 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81095 00:17:51.332 killing process with pid 81095 00:17:51.332 Received shutdown signal, test time was about 2.000000 seconds 00:17:51.332 00:17:51.332 Latency(us) 00:17:51.332 [2024-12-11T08:51:59.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.332 [2024-12-11T08:51:59.106Z] 
=================================================================================================================== 00:17:51.332 [2024-12-11T08:51:59.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.332 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81095' 00:17:51.333 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81095 00:17:51.333 08:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81095 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80922 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80922 ']' 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80922 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.333 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80922 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.591 killing process with pid 80922 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80922' 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80922 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80922 00:17:51.591 ************************************ 00:17:51.591 END TEST nvmf_digest_error 00:17:51.591 ************************************ 00:17:51.591 00:17:51.591 real 0m14.817s 00:17:51.591 user 0m29.245s 00:17:51.591 sys 0m4.183s 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:51.591 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:51.592 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.592 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.850 
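(For reference, the error check traced above reduces to the small pipeline below. This is a minimal sketch assembled from the commands visible in the trace: the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter are copied verbatim from the log, while the wrapper function and variable names are illustrative only, not the actual host/digest.sh source.)

    # Sketch: read the per-bdev NVMe error counters over the bperf RPC socket
    # and assert that data-digest errors were recorded during the run.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # 403 in this run
    (( errcount > 0 ))                           # test passes only if digest errors were counted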
rmmod nvme_tcp 00:17:51.850 rmmod nvme_fabrics 00:17:51.850 rmmod nvme_keyring 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.850 Process with pid 80922 is not found 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80922 ']' 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80922 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80922 ']' 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80922 00:17:51.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80922) - No such process 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80922 is not found' 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.850 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # 
remove_spdk_ns 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:52.109 00:17:52.109 real 0m30.700s 00:17:52.109 user 0m58.574s 00:17:52.109 sys 0m8.767s 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.109 ************************************ 00:17:52.109 END TEST nvmf_digest 00:17:52.109 ************************************ 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.109 08:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.110 ************************************ 00:17:52.110 START TEST nvmf_host_multipath 00:17:52.110 ************************************ 00:17:52.110 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:52.110 * Looking for test storage... 
00:17:52.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.110 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.110 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.110 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.369 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.369 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.370 --rc genhtml_branch_coverage=1 00:17:52.370 --rc genhtml_function_coverage=1 00:17:52.370 --rc genhtml_legend=1 00:17:52.370 --rc geninfo_all_blocks=1 00:17:52.370 --rc geninfo_unexecuted_blocks=1 00:17:52.370 00:17:52.370 ' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.370 --rc genhtml_branch_coverage=1 00:17:52.370 --rc genhtml_function_coverage=1 00:17:52.370 --rc genhtml_legend=1 00:17:52.370 --rc geninfo_all_blocks=1 00:17:52.370 --rc geninfo_unexecuted_blocks=1 00:17:52.370 00:17:52.370 ' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.370 --rc genhtml_branch_coverage=1 00:17:52.370 --rc genhtml_function_coverage=1 00:17:52.370 --rc genhtml_legend=1 00:17:52.370 --rc geninfo_all_blocks=1 00:17:52.370 --rc geninfo_unexecuted_blocks=1 00:17:52.370 00:17:52.370 ' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.370 --rc genhtml_branch_coverage=1 00:17:52.370 --rc genhtml_function_coverage=1 00:17:52.370 --rc genhtml_legend=1 00:17:52.370 --rc geninfo_all_blocks=1 00:17:52.370 --rc geninfo_unexecuted_blocks=1 00:17:52.370 00:17:52.370 ' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.370 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.371 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.371 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.371 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.371 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.371 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.371 Cannot find device "nvmf_init_br" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.371 Cannot find device "nvmf_init_br2" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.371 Cannot find device "nvmf_tgt_br" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.371 Cannot find device "nvmf_tgt_br2" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.371 Cannot find device "nvmf_init_br" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.371 Cannot find device "nvmf_init_br2" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.371 Cannot find device "nvmf_tgt_br" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.371 Cannot find device "nvmf_tgt_br2" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.371 Cannot find device "nvmf_br" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.371 Cannot find device "nvmf_init_if" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.371 Cannot find device "nvmf_init_if2" 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:52.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:52.371 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
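For readers following the nvmf_veth_init sequence above, the sketch below condenses the topology it is assembling: a target network namespace, veth pairs whose outer ends are enslaved to a software bridge, and /24 addresses on either side. Interface names and addresses are copied from the log entries; the authoritative logic lives in test/nvmf/common.sh, so treat this as an orientation aid only (just one of the two initiator/target interface pairs is shown — the log also creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4).

# Minimal sketch of the veth/bridge layout built above (assumes root and a clean host).
ip netns add nvmf_tgt_ns_spdk                                        # target runs isolated in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator (host-side) address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up            # bridge that joins the pairs
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                   # same sanity check the log performs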
00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:52.630 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:52.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:52.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:17:52.630 00:17:52.630 --- 10.0.0.3 ping statistics --- 00:17:52.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.631 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:52.631 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:52.631 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:52.631 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:52.631 00:17:52.631 --- 10.0.0.4 ping statistics --- 00:17:52.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.631 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:52.631 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:52.631 00:17:52.631 --- 10.0.0.1 ping statistics --- 00:17:52.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.631 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:52.631 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:52.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:52.890 00:17:52.890 --- 10.0.0.2 ping statistics --- 00:17:52.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.890 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:52.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81399 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81399 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81399 ']' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.890 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:52.890 [2024-12-11 08:52:00.495796] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
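The entries that follow show the test configuring the freshly started target entirely over its RPC socket before bdevperf connects to it. A condensed sketch of that sequence, with the commands copied from the log below (option meanings are as understood from scripts/rpc.py and should be verified against your SPDK tree):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport for the target
$rpc_py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                             # -r enables ANA reporting
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # path 1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # path 2
# bdevperf then attaches one controller per listener (-x multipath), and the test repeatedly
# flips ANA states with nvmf_subsystem_listener_set_ana_state while the bpftrace script
# scripts/bpf/nvmf_path.bt records which port is actually carrying IO.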
00:17:52.890 [2024-12-11 08:52:00.496429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.890 [2024-12-11 08:52:00.648382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:53.149 [2024-12-11 08:52:00.687285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.149 [2024-12-11 08:52:00.687343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.149 [2024-12-11 08:52:00.687357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.149 [2024-12-11 08:52:00.687367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.149 [2024-12-11 08:52:00.687376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.149 [2024-12-11 08:52:00.688270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.149 [2024-12-11 08:52:00.688902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.149 [2024-12-11 08:52:00.723118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81399 00:17:53.149 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:53.409 [2024-12-11 08:52:01.110959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.409 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:53.667 Malloc0 00:17:53.667 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:53.925 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.184 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.443 [2024-12-11 08:52:02.093148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.444 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:54.702 [2024-12-11 08:52:02.345068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:54.702 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:54.702 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81447 00:17:54.702 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.702 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81447 /var/tmp/bdevperf.sock 00:17:54.702 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81447 ']' 00:17:54.703 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.703 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.703 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.703 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.703 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:55.270 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.270 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:55.270 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:55.270 08:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:55.529 Nvme0n1 00:17:55.529 08:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:56.097 Nvme0n1 00:17:56.097 08:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:56.097 08:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.048 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:57.048 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:57.307 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:57.566 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:57.566 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81485 00:17:57.566 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:57.566 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.134 Attaching 4 probes... 00:18:04.134 @path[10.0.0.3, 4421]: 18468 00:18:04.134 @path[10.0.0.3, 4421]: 18874 00:18:04.134 @path[10.0.0.3, 4421]: 18596 00:18:04.134 @path[10.0.0.3, 4421]: 19167 00:18:04.134 @path[10.0.0.3, 4421]: 18550 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81485 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:04.134 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:04.393 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:04.393 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81604 00:18:04.393 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:04.393 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.958 Attaching 4 probes... 00:18:10.958 @path[10.0.0.3, 4420]: 18391 00:18:10.958 @path[10.0.0.3, 4420]: 18961 00:18:10.958 @path[10.0.0.3, 4420]: 17864 00:18:10.958 @path[10.0.0.3, 4420]: 17885 00:18:10.958 @path[10.0.0.3, 4420]: 18753 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81604 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:10.958 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:11.216 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:11.475 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:11.475 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:11.475 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81722 00:18:11.475 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.041 Attaching 4 probes... 00:18:18.041 @path[10.0.0.3, 4421]: 14154 00:18:18.041 @path[10.0.0.3, 4421]: 18825 00:18:18.041 @path[10.0.0.3, 4421]: 18768 00:18:18.041 @path[10.0.0.3, 4421]: 19172 00:18:18.041 @path[10.0.0.3, 4421]: 18872 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81722 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:18.041 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:18.300 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:18.300 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81833 00:18:18.300 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:18.300 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:24.867 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:24.867 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:24.868 Attaching 4 probes... 
00:18:24.868 00:18:24.868 00:18:24.868 00:18:24.868 00:18:24.868 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81833 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:24.868 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:25.435 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:25.435 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.435 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81947 00:18:25.435 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:32.001 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:32.001 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:32.001 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:32.001 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.001 Attaching 4 probes... 
00:18:32.001 @path[10.0.0.3, 4421]: 17716 00:18:32.001 @path[10.0.0.3, 4421]: 18908 00:18:32.001 @path[10.0.0.3, 4421]: 17362 00:18:32.001 @path[10.0.0.3, 4421]: 17544 00:18:32.001 @path[10.0.0.3, 4421]: 18182 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81947 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:32.002 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:32.939 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:32.939 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82076 00:18:32.939 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:32.939 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.506 Attaching 4 probes... 
00:18:39.506 @path[10.0.0.3, 4420]: 16784 00:18:39.506 @path[10.0.0.3, 4420]: 17171 00:18:39.506 @path[10.0.0.3, 4420]: 16901 00:18:39.506 @path[10.0.0.3, 4420]: 17121 00:18:39.506 @path[10.0.0.3, 4420]: 17091 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82076 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.506 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:39.506 [2024-12-11 08:52:47.140343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:39.506 08:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:39.764 08:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:46.327 08:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:46.327 08:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82249 00:18:46.327 08:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81399 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.327 08:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:52.898 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.898 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.899 Attaching 4 probes... 
00:18:52.899 @path[10.0.0.3, 4421]: 15486 00:18:52.899 @path[10.0.0.3, 4421]: 15230 00:18:52.899 @path[10.0.0.3, 4421]: 17270 00:18:52.899 @path[10.0.0.3, 4421]: 17317 00:18:52.899 @path[10.0.0.3, 4421]: 17097 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82249 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81447 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81447 ']' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81447 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81447 00:18:52.899 killing process with pid 81447 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81447' 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81447 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81447 00:18:52.899 { 00:18:52.899 "results": [ 00:18:52.899 { 00:18:52.899 "job": "Nvme0n1", 00:18:52.899 "core_mask": "0x4", 00:18:52.899 "workload": "verify", 00:18:52.899 "status": "terminated", 00:18:52.899 "verify_range": { 00:18:52.899 "start": 0, 00:18:52.899 "length": 16384 00:18:52.899 }, 00:18:52.899 "queue_depth": 128, 00:18:52.899 "io_size": 4096, 00:18:52.899 "runtime": 56.069403, 00:18:52.899 "iops": 7583.601344926037, 00:18:52.899 "mibps": 29.623442753617333, 00:18:52.899 "io_failed": 0, 00:18:52.899 "io_timeout": 0, 00:18:52.899 "avg_latency_us": 16845.936156234126, 00:18:52.899 "min_latency_us": 785.6872727272727, 00:18:52.899 "max_latency_us": 7046430.72 00:18:52.899 } 00:18:52.899 ], 00:18:52.899 "core_count": 1 00:18:52.899 } 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81447 00:18:52.899 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:52.899 [2024-12-11 08:52:02.418754] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 
24.03.0 initialization... 00:18:52.899 [2024-12-11 08:52:02.418855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81447 ] 00:18:52.899 [2024-12-11 08:52:02.570998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.899 [2024-12-11 08:52:02.611275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.899 [2024-12-11 08:52:02.643484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.899 Running I/O for 90 seconds... 00:18:52.899 7189.00 IOPS, 28.08 MiB/s [2024-12-11T08:53:00.673Z] 8109.50 IOPS, 31.68 MiB/s [2024-12-11T08:53:00.673Z] 8545.00 IOPS, 33.38 MiB/s [2024-12-11T08:53:00.673Z] 8782.75 IOPS, 34.31 MiB/s [2024-12-11T08:53:00.673Z] 8879.00 IOPS, 34.68 MiB/s [2024-12-11T08:53:00.673Z] 8992.50 IOPS, 35.13 MiB/s [2024-12-11T08:53:00.673Z] 9046.14 IOPS, 35.34 MiB/s [2024-12-11T08:53:00.673Z] 9096.38 IOPS, 35.53 MiB/s [2024-12-11T08:53:00.673Z] [2024-12-11 08:52:12.117296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.899 [2024-12-11 08:52:12.117697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.117973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.117994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.899 [2024-12-11 08:52:12.118200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:52.899 [2024-12-11 08:52:12.118233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.118251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.118293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.118328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.118363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:68 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.118957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.118992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:52.900 
[2024-12-11 08:52:12.119238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.900 [2024-12-11 08:52:12.119653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:52.900 [2024-12-11 08:52:12.119715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.900 [2024-12-11 08:52:12.119730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.119978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.119998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.901 [2024-12-11 08:52:12.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.120967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.120981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:52.901 [2024-12-11 08:52:12.121197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.901 [2024-12-11 08:52:12.121213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:18:52.902 [2024-12-11 08:52:12.121510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.121544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.121573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.123304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.123352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.123390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.123428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.902 [2024-12-11 08:52:12.123480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.123977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.123992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.124013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.124027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.124048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.124062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.124083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.124097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:12.124121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:12.124137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:52.902 9092.00 IOPS, 35.52 MiB/s [2024-12-11T08:53:00.676Z] 9114.00 IOPS, 35.60 MiB/s [2024-12-11T08:53:00.676Z] 9140.00 IOPS, 35.70 MiB/s [2024-12-11T08:53:00.676Z] 9115.67 IOPS, 35.61 MiB/s [2024-12-11T08:53:00.676Z] 9114.77 IOPS, 35.60 MiB/s [2024-12-11T08:53:00.676Z] 9127.14 IOPS, 35.65 MiB/s [2024-12-11T08:53:00.676Z] 9128.27 IOPS, 35.66 MiB/s [2024-12-11T08:53:00.676Z] [2024-12-11 08:52:18.807222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:18.807277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:18.807331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:18.807353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:52.902 [2024-12-11 08:52:18.807376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.902 [2024-12-11 08:52:18.807391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.903 [2024-12-11 08:52:18.807462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.903 [2024-12-11 08:52:18.807497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:52.903 [2024-12-11 08:52:18.807557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.903 [2024-12-11 08:52:18.807592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.903 [2024-12-11 08:52:18.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.807965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.807979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.903 [2024-12-11 08:52:18.808356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.903 [2024-12-11 08:52:18.808376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.808398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.808434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.808468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.808569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:18:52.904 [2024-12-11 08:52:18.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.808978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.808992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.904 [2024-12-11 08:52:18.809501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:52.904 [2024-12-11 08:52:18.809891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.904 [2024-12-11 08:52:18.809930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:52.904 [2024-12-11 08:52:18.809965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.809979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.809999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.810013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.810046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.810079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.810113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.810146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.810939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:18:52.905 [2024-12-11 08:52:18.810974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.810988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.811022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.905 [2024-12-11 08:52:18.811381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:52.905 [2024-12-11 08:52:18.811402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.905 [2024-12-11 08:52:18.811418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.811671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.906 [2024-12-11 08:52:18.812380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:52.906 [2024-12-11 08:52:18.812950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.812979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.813022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.813045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.813075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.813091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:18.813119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:18.813134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:52.906 8584.25 IOPS, 33.53 MiB/s [2024-12-11T08:53:00.680Z] 8599.24 IOPS, 33.59 MiB/s [2024-12-11T08:53:00.680Z] 8645.50 IOPS, 33.77 MiB/s [2024-12-11T08:53:00.680Z] 8684.37 IOPS, 33.92 MiB/s [2024-12-11T08:53:00.680Z] 8722.55 IOPS, 34.07 MiB/s [2024-12-11T08:53:00.680Z] 8760.90 IOPS, 34.22 MiB/s [2024-12-11T08:53:00.680Z] 8787.41 IOPS, 34.33 MiB/s [2024-12-11T08:53:00.680Z] [2024-12-11 08:52:25.932084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.906 [2024-12-11 08:52:25.932744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:52.906 [2024-12-11 08:52:25.932763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.932777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.932811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.932845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.932879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.932913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.932949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.932970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.932984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.933026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.933063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.933098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 
dnr:0 00:18:52.907 [2024-12-11 08:52:25.933124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.907 [2024-12-11 08:52:25.933895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.933930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.933966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.933986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:52.907 [2024-12-11 08:52:25.934323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:52.907 [2024-12-11 08:52:25.934345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.934552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.934962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.934978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.908 [2024-12-11 08:52:25.935015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 
08:52:25.935073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.908 [2024-12-11 08:52:25.935670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:52.908 [2024-12-11 08:52:25.935694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.935979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.935993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:52.909 [2024-12-11 08:52:25.936243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.936612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.936840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.936854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.909 [2024-12-11 08:52:25.937691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.937966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.938012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.938027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:52.909 [2024-12-11 08:52:25.938073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.909 [2024-12-11 08:52:25.938092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:52.909 8473.52 IOPS, 33.10 MiB/s [2024-12-11T08:53:00.684Z] 8120.46 IOPS, 31.72 MiB/s [2024-12-11T08:53:00.684Z] 7795.64 IOPS, 30.45 MiB/s [2024-12-11T08:53:00.684Z] 7495.81 IOPS, 29.28 MiB/s [2024-12-11T08:53:00.684Z] 7218.19 IOPS, 28.20 MiB/s [2024-12-11T08:53:00.684Z] 6960.39 IOPS, 27.19 MiB/s [2024-12-11T08:53:00.684Z] 6720.38 IOPS, 26.25 MiB/s [2024-12-11T08:53:00.684Z] 6724.80 IOPS, 26.27 MiB/s [2024-12-11T08:53:00.684Z] 6805.16 IOPS, 26.58 MiB/s [2024-12-11T08:53:00.684Z] 6881.50 IOPS, 26.88 MiB/s [2024-12-11T08:53:00.684Z] 6932.36 IOPS, 27.08 MiB/s [2024-12-11T08:53:00.684Z] 6994.12 IOPS, 27.32 MiB/s [2024-12-11T08:53:00.684Z] 7050.97 IOPS, 27.54 MiB/s [2024-12-11T08:53:00.684Z] [2024-12-11 08:52:39.550254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.910 
[2024-12-11 08:52:39.550446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.550935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.550971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551229] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.910 [2024-12-11 08:52:39.551304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.910 [2024-12-11 08:52:39.551798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.910 [2024-12-11 08:52:39.551812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.551840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.551875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.551906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.551950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.551978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.551992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 
08:52:39.552281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.911 [2024-12-11 08:52:39.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.552974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.553002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.553017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.553035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.911 [2024-12-11 08:52:39.553050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.911 [2024-12-11 08:52:39.553064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65232 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 
[2024-12-11 08:52:39.553497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.912 [2024-12-11 08:52:39.553596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.553942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.553986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.554001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.554014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.554029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.554042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.554056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.912 [2024-12-11 08:52:39.554070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.554084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95b290 is same with the state(6) to be set 00:18:52.912 [2024-12-11 08:52:39.554099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.912 [2024-12-11 08:52:39.554110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.912 [2024-12-11 08:52:39.554120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:18:52.912 [2024-12-11 08:52:39.554133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.912 [2024-12-11 08:52:39.554146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.912 [2024-12-11 08:52:39.554156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65352 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65360 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65400 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65408 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65416 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65424 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65432 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65440 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65456 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.554877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.913 [2024-12-11 08:52:39.554887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.913 [2024-12-11 08:52:39.554897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65464 len:8 PRP1 0x0 PRP2 0x0 00:18:52.913 [2024-12-11 08:52:39.554910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.913 [2024-12-11 08:52:39.555115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.913 [2024-12-11 08:52:39.555185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.913 [2024-12-11 08:52:39.555221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:18:52.913 [2024-12-11 08:52:39.555249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.913 [2024-12-11 08:52:39.555278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.913 [2024-12-11 08:52:39.555298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cbe90 is same with the state(6) to be set 00:18:52.913 [2024-12-11 08:52:39.556454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:52.913 [2024-12-11 08:52:39.556494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cbe90 (9): Bad file descriptor 00:18:52.913 [2024-12-11 08:52:39.556901] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.913 [2024-12-11 08:52:39.556934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cbe90 with addr=10.0.0.3, port=4421 00:18:52.913 [2024-12-11 08:52:39.556967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cbe90 is same with the state(6) to be set 00:18:52.913 [2024-12-11 08:52:39.557027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cbe90 (9): Bad file descriptor 00:18:52.913 [2024-12-11 08:52:39.557063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:52.913 [2024-12-11 08:52:39.557079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:52.913 [2024-12-11 08:52:39.557093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:52.913 [2024-12-11 08:52:39.557106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:18:52.913 [2024-12-11 08:52:39.557121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:52.913 7097.36 IOPS, 27.72 MiB/s [2024-12-11T08:53:00.687Z] 7135.81 IOPS, 27.87 MiB/s [2024-12-11T08:53:00.687Z] 7168.87 IOPS, 28.00 MiB/s [2024-12-11T08:53:00.687Z] 7207.41 IOPS, 28.15 MiB/s [2024-12-11T08:53:00.687Z] 7235.02 IOPS, 28.26 MiB/s [2024-12-11T08:53:00.687Z] 7269.10 IOPS, 28.39 MiB/s [2024-12-11T08:53:00.687Z] 7299.45 IOPS, 28.51 MiB/s [2024-12-11T08:53:00.687Z] 7328.21 IOPS, 28.63 MiB/s [2024-12-11T08:53:00.687Z] 7346.93 IOPS, 28.70 MiB/s [2024-12-11T08:53:00.687Z] 7376.38 IOPS, 28.81 MiB/s [2024-12-11T08:53:00.688Z] [2024-12-11 08:52:49.621552] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:18:52.914 7404.52 IOPS, 28.92 MiB/s [2024-12-11T08:53:00.688Z] 7429.79 IOPS, 29.02 MiB/s [2024-12-11T08:53:00.688Z] 7455.33 IOPS, 29.12 MiB/s [2024-12-11T08:53:00.688Z] 7478.86 IOPS, 29.21 MiB/s [2024-12-11T08:53:00.688Z] 7500.16 IOPS, 29.30 MiB/s [2024-12-11T08:53:00.688Z] 7511.02 IOPS, 29.34 MiB/s [2024-12-11T08:53:00.688Z] 7522.31 IOPS, 29.38 MiB/s [2024-12-11T08:53:00.688Z] 7529.66 IOPS, 29.41 MiB/s [2024-12-11T08:53:00.688Z] 7545.19 IOPS, 29.47 MiB/s [2024-12-11T08:53:00.688Z] 7566.55 IOPS, 29.56 MiB/s [2024-12-11T08:53:00.688Z] 7583.86 IOPS, 29.62 MiB/s [2024-12-11T08:53:00.688Z] Received shutdown signal, test time was about 56.070289 seconds 00:18:52.914 00:18:52.914 Latency(us) 00:18:52.914 [2024-12-11T08:53:00.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.914 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.914 Verification LBA range: start 0x0 length 0x4000 00:18:52.914 Nvme0n1 : 56.07 7583.60 29.62 0.00 0.00 16845.94 785.69 7046430.72 00:18:52.914 [2024-12-11T08:53:00.688Z] =================================================================================================================== 00:18:52.914 [2024-12-11T08:53:00.688Z] Total : 7583.60 29.62 0.00 0.00 16845.94 785.69 7046430.72 00:18:52.914 08:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.914 rmmod nvme_tcp 00:18:52.914 rmmod nvme_fabrics 00:18:52.914 rmmod nvme_keyring 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81399 ']' 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81399 ']' 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.914 08:53:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.914 killing process with pid 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81399' 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81399 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:52.914 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:53.173 00:18:53.173 real 1m1.048s 00:18:53.173 user 2m49.960s 00:18:53.173 sys 0m18.000s 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:53.173 ************************************ 00:18:53.173 END TEST nvmf_host_multipath 00:18:53.173 ************************************ 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.173 ************************************ 00:18:53.173 START TEST nvmf_timeout 00:18:53.173 ************************************ 00:18:53.173 08:53:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:53.433 * Looking for test storage... 00:18:53.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:53.433 08:53:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:53.433 08:53:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:18:53.433 08:53:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.433 --rc genhtml_branch_coverage=1 00:18:53.433 --rc genhtml_function_coverage=1 00:18:53.433 --rc genhtml_legend=1 00:18:53.433 --rc geninfo_all_blocks=1 00:18:53.433 --rc geninfo_unexecuted_blocks=1 00:18:53.433 00:18:53.433 ' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.433 --rc genhtml_branch_coverage=1 00:18:53.433 --rc genhtml_function_coverage=1 00:18:53.433 --rc genhtml_legend=1 00:18:53.433 --rc geninfo_all_blocks=1 00:18:53.433 --rc geninfo_unexecuted_blocks=1 00:18:53.433 00:18:53.433 ' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.433 --rc genhtml_branch_coverage=1 00:18:53.433 --rc genhtml_function_coverage=1 00:18:53.433 --rc genhtml_legend=1 00:18:53.433 --rc geninfo_all_blocks=1 00:18:53.433 --rc geninfo_unexecuted_blocks=1 00:18:53.433 00:18:53.433 ' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.433 --rc genhtml_branch_coverage=1 00:18:53.433 --rc genhtml_function_coverage=1 00:18:53.433 --rc genhtml_legend=1 00:18:53.433 --rc geninfo_all_blocks=1 00:18:53.433 --rc geninfo_unexecuted_blocks=1 00:18:53.433 00:18:53.433 ' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.433 08:53:01 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.433 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
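The "[: : integer expression expected" complaint from common.sh line 33 just above is a harness nit, not a test failure: the script runs '[' '' -eq 1 ']' with an empty value, and bash's [ builtin requires both -eq operands to be integers. A minimal reproduction (VAR is a hypothetical stand-in for the unset variable):

  VAR=""
  [ "$VAR" -eq 1 ] && echo match        # prints "[: : integer expression expected" and returns non-zero
  [ "${VAR:-0}" -eq 1 ] && echo match   # defaulting the empty value sidesteps the error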
00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:53.434 Cannot find device "nvmf_init_br" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:53.434 Cannot find device "nvmf_init_br2" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:53.434 Cannot find device "nvmf_tgt_br" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:53.434 Cannot find device "nvmf_tgt_br2" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:53.434 Cannot find device "nvmf_init_br" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:53.434 Cannot find device "nvmf_init_br2" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:53.434 Cannot find device "nvmf_tgt_br" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:53.434 Cannot find device "nvmf_tgt_br2" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:53.434 Cannot find device "nvmf_br" 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:53.434 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:53.693 Cannot find device "nvmf_init_if" 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:53.693 Cannot find device "nvmf_init_if2" 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:53.693 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.693 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:53.693 00:18:53.693 --- 10.0.0.3 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:53.693 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:53.693 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:18:53.693 00:18:53.693 --- 10.0.0.4 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:53.693 00:18:53.693 --- 10.0.0.1 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:53.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:53.693 00:18:53.693 --- 10.0.0.2 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.693 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82612 00:18:53.694 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82612 00:18:53.952 08:53:01 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82612 ']' 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.952 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.952 [2024-12-11 08:53:01.517788] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:18:53.953 [2024-12-11 08:53:01.517878] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.953 [2024-12-11 08:53:01.663767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.953 [2024-12-11 08:53:01.696189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.953 [2024-12-11 08:53:01.696252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.953 [2024-12-11 08:53:01.696263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.953 [2024-12-11 08:53:01.696271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.953 [2024-12-11 08:53:01.696278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
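Because NET_TYPE is virt here, nvmftestinit builds the fabric from veth pairs and a bridge rather than physical NICs: the initiator-side interfaces keep 10.0.0.1/10.0.0.2, the target-side interfaces (10.0.0.3/10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and nvmf_tgt is launched inside that namespace with -m 0x3, which is why the next records show reactors starting on cores 0 and 1. A sketch of the same topology reduced to a single path, using the commands from the trace above (link-up and iptables ACCEPT steps summarized in a comment):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  # bring every interface up, enslave nvmf_init_br and nvmf_tgt_br to nvmf_br,
  # and add the iptables ACCEPT rules for port 4420 as in the trace, then:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

The pings earlier in the trace (to 10.0.0.3/10.0.0.4 from the host and to 10.0.0.1/10.0.0.2 from inside the namespace) are the harness confirming the bridge forwards in both directions before any NVMe/TCP traffic is attempted.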
00:18:53.953 [2024-12-11 08:53:01.697186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.953 [2024-12-11 08:53:01.697187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.211 [2024-12-11 08:53:01.727943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.211 08:53:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.470 [2024-12-11 08:53:02.125852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.470 08:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:54.728 Malloc0 00:18:54.729 08:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.987 08:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.245 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:55.503 [2024-12-11 08:53:03.256600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82659 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82659 /var/tmp/bdevperf.sock 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82659 ']' 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.762 08:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:55.762 [2024-12-11 08:53:03.334868] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:18:55.762 [2024-12-11 08:53:03.334960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82659 ] 00:18:55.762 [2024-12-11 08:53:03.485676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.762 [2024-12-11 08:53:03.524779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.021 [2024-12-11 08:53:03.557600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:56.588 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.588 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:56.588 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:56.846 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:57.418 NVMe0n1 00:18:57.418 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82677 00:18:57.418 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.418 08:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:57.418 Running I/O for 10 seconds... 
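The timeout test drives I/O from bdevperf, started above with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f, i.e. a queue-depth-128, 4 KiB verify workload that waits for RPC configuration before running. The settings this test actually exercises are the reconnect knobs passed when the controller is attached. A condensed sketch of the configuration visible in the trace (rpc.py/bdevperf.py paths shortened; -r -1 is copied verbatim from the trace):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # reconnect every 2 s; give up on the controller after 5 s without a successful reconnect
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the 10-second run defined on the bdevperf command line
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Shortly after the run starts, the test removes the 4420 listener (the rpc.py nvmf_subsystem_remove_listener call below), which is followed in the trace by the long run of recv-state messages and the queued-READ aborts.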
00:18:58.359 08:53:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:58.621 6804.00 IOPS, 26.58 MiB/s [2024-12-11T08:53:06.395Z] [2024-12-11 08:53:06.228920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c190 is same with the state(6) to be set (this recv-state message repeats essentially verbatim from here through the rest of the burst; only the microsecond timestamps advance, covering roughly one millisecond of trace)
00:18:58.622 [2024-12-11 08:53:06.229948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c190 is same with the state(6) to be set 00:18:58.622 [2024-12-11 08:53:06.229956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c190 is same with the state(6) to be set 00:18:58.622 [2024-12-11 08:53:06.230016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.622 [2024-12-11 08:53:06.230303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.622 [2024-12-11 08:53:06.230315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.230985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.230994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 
08:53:06.231105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.623 [2024-12-11 08:53:06.231171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.623 [2024-12-11 08:53:06.231180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.231958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 
[2024-12-11 08:53:06.231979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.231988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.232000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.232009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.232021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.624 [2024-12-11 08:53:06.232030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.624 [2024-12-11 08:53:06.232041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.625 [2024-12-11 08:53:06.232733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.625 [2024-12-11 08:53:06.232753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.625 [2024-12-11 08:53:06.232764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1263100 is same with the state(6) to be set 00:18:58.626 [2024-12-11 08:53:06.232777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.626 [2024-12-11 08:53:06.232787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.626 [2024-12-11 08:53:06.232796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:18:58.626 [2024-12-11 08:53:06.232805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.626 [2024-12-11 08:53:06.233083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:58.626 [2024-12-11 08:53:06.233178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5070 (9): Bad file descriptor 00:18:58.626 [2024-12-11 08:53:06.233291] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.626 [2024-12-11 
08:53:06.233313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f5070 with addr=10.0.0.3, port=4420 00:18:58.626 [2024-12-11 08:53:06.233325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5070 is same with the state(6) to be set 00:18:58.626 [2024-12-11 08:53:06.233344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5070 (9): Bad file descriptor 00:18:58.626 [2024-12-11 08:53:06.233360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:58.626 [2024-12-11 08:53:06.233369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:58.626 [2024-12-11 08:53:06.233380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:58.626 [2024-12-11 08:53:06.233390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:58.626 [2024-12-11 08:53:06.233401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:58.626 08:53:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:00.497 3986.00 IOPS, 15.57 MiB/s [2024-12-11T08:53:08.271Z] 2657.33 IOPS, 10.38 MiB/s [2024-12-11T08:53:08.271Z] [2024-12-11 08:53:08.233594] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.497 [2024-12-11 08:53:08.233662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f5070 with addr=10.0.0.3, port=4420 00:19:00.497 [2024-12-11 08:53:08.233677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5070 is same with the state(6) to be set 00:19:00.497 [2024-12-11 08:53:08.233702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5070 (9): Bad file descriptor 00:19:00.497 [2024-12-11 08:53:08.233720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:00.497 [2024-12-11 08:53:08.233730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:00.497 [2024-12-11 08:53:08.233740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:00.497 [2024-12-11 08:53:08.233751] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:00.497 [2024-12-11 08:53:08.233761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:00.497 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:00.497 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.497 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:01.065 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:01.065 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:01.065 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:01.065 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:01.323 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:01.323 08:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:02.698 1993.00 IOPS, 7.79 MiB/s [2024-12-11T08:53:10.472Z] 1594.40 IOPS, 6.23 MiB/s [2024-12-11T08:53:10.472Z] [2024-12-11 08:53:10.233978] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.698 [2024-12-11 08:53:10.234059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f5070 with addr=10.0.0.3, port=4420 00:19:02.698 [2024-12-11 08:53:10.234074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5070 is same with the state(6) to be set 00:19:02.698 [2024-12-11 08:53:10.234098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5070 (9): Bad file descriptor 00:19:02.698 [2024-12-11 08:53:10.234116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:02.698 [2024-12-11 08:53:10.234126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:02.698 [2024-12-11 08:53:10.234137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:02.698 [2024-12-11 08:53:10.234148] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:02.698 [2024-12-11 08:53:10.234188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:04.567 1328.67 IOPS, 5.19 MiB/s [2024-12-11T08:53:12.341Z] 1138.86 IOPS, 4.45 MiB/s [2024-12-11T08:53:12.341Z] [2024-12-11 08:53:12.234307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:04.567 [2024-12-11 08:53:12.234374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:04.567 [2024-12-11 08:53:12.234387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:04.567 [2024-12-11 08:53:12.234397] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:04.567 [2024-12-11 08:53:12.234408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:05.508 996.50 IOPS, 3.89 MiB/s 00:19:05.508 Latency(us) 00:19:05.508 [2024-12-11T08:53:13.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.508 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.508 Verification LBA range: start 0x0 length 0x4000 00:19:05.508 NVMe0n1 : 8.20 972.63 3.80 15.62 0.00 129375.72 3902.37 7046430.72 00:19:05.508 [2024-12-11T08:53:13.282Z] =================================================================================================================== 00:19:05.508 [2024-12-11T08:53:13.282Z] Total : 972.63 3.80 15.62 0.00 129375.72 3902.37 7046430.72 00:19:05.508 { 00:19:05.508 "results": [ 00:19:05.508 { 00:19:05.508 "job": "NVMe0n1", 00:19:05.508 "core_mask": "0x4", 00:19:05.508 "workload": "verify", 00:19:05.508 "status": "finished", 00:19:05.508 "verify_range": { 00:19:05.508 "start": 0, 00:19:05.508 "length": 16384 00:19:05.508 }, 00:19:05.508 "queue_depth": 128, 00:19:05.508 "io_size": 4096, 00:19:05.508 "runtime": 8.196366, 00:19:05.508 "iops": 972.6261613988443, 00:19:05.508 "mibps": 3.7993209429642354, 00:19:05.508 "io_failed": 128, 00:19:05.508 "io_timeout": 0, 00:19:05.508 "avg_latency_us": 129375.72273849606, 00:19:05.508 "min_latency_us": 3902.370909090909, 00:19:05.508 "max_latency_us": 7046430.72 00:19:05.508 } 00:19:05.508 ], 00:19:05.508 "core_count": 1 00:19:05.508 } 00:19:06.137 08:53:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:06.137 08:53:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:06.137 08:53:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:06.395 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:06.395 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:06.395 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:06.395 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82677 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82659 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82659 ']' 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82659 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.652 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82659 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:06.930 killing process with pid 82659 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82659' 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82659 00:19:06.930 Received shutdown signal, test time was about 9.396768 seconds 00:19:06.930 00:19:06.930 Latency(us) 00:19:06.930 [2024-12-11T08:53:14.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.930 [2024-12-11T08:53:14.704Z] =================================================================================================================== 00:19:06.930 [2024-12-11T08:53:14.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82659 00:19:06.930 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:07.188 [2024-12-11 08:53:14.861613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82805 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82805 /var/tmp/bdevperf.sock 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82805 ']' 00:19:07.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.188 08:53:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.188 [2024-12-11 08:53:14.934628] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:19:07.188 [2024-12-11 08:53:14.935198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82805 ] 00:19:07.448 [2024-12-11 08:53:15.080006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.448 [2024-12-11 08:53:15.112353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.448 [2024-12-11 08:53:15.141505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.383 08:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.383 08:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:08.383 08:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:08.383 08:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:08.950 NVMe0n1 00:19:08.950 08:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82830 00:19:08.950 08:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:08.950 08:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.950 Running I/O for 10 seconds... 
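The bdev_nvme_attach_controller call above is what arms the reconnect behaviour this test exercises. As used in this run, the three timeout flags roughly mean: retry the TCP connection every second, fail queued I/O after two seconds without a connection, and give the controller up entirely after five. A sketch of the attach call with those flags, matching the invocation logged above:

  # Attach the remote subsystem with a 1s reconnect retry, 2s fast I/O fail-over,
  # and a 5s controller-loss budget (the listener comes back within that window here).
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1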
00:19:09.886 08:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:10.148 7076.00 IOPS, 27.64 MiB/s [2024-12-11T08:53:17.922Z] [2024-12-11 08:53:17.733051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65632 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:10.148 [2024-12-11 08:53:17.733573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733777] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.148 [2024-12-11 08:53:17.733817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.148 [2024-12-11 08:53:17.733949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.148 [2024-12-11 08:53:17.733958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.733969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.149 [2024-12-11 08:53:17.733978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.733989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 
[2024-12-11 08:53:17.734609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.149 [2024-12-11 08:53:17.734772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.149 [2024-12-11 08:53:17.734781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.734984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.734995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:10.150 [2024-12-11 08:53:17.735463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.150 [2024-12-11 08:53:17.735625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.150 [2024-12-11 08:53:17.735636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735666] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.151 [2024-12-11 08:53:17.735786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.735796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183e100 is same with the state(6) to be set 00:19:10.151 [2024-12-11 08:53:17.735809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.151 [2024-12-11 08:53:17.735817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.151 [2024-12-11 08:53:17.735825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65552 len:8 PRP1 0x0 PRP2 0x0 00:19:10.151 [2024-12-11 08:53:17.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.151 [2024-12-11 08:53:17.736126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:10.151 [2024-12-11 08:53:17.736216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:10.151 [2024-12-11 08:53:17.736327] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.151 [2024-12-11 08:53:17.736357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with 
addr=10.0.0.3, port=4420 00:19:10.151 [2024-12-11 08:53:17.736369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:10.151 [2024-12-11 08:53:17.736387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:10.151 [2024-12-11 08:53:17.736402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:10.151 [2024-12-11 08:53:17.736412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:10.151 [2024-12-11 08:53:17.736422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:10.151 [2024-12-11 08:53:17.736432] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:10.151 [2024-12-11 08:53:17.736442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:10.151 08:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:11.088 4042.50 IOPS, 15.79 MiB/s [2024-12-11T08:53:18.862Z] [2024-12-11 08:53:18.736558] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.088 [2024-12-11 08:53:18.736647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with addr=10.0.0.3, port=4420 00:19:11.088 [2024-12-11 08:53:18.736663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:11.088 [2024-12-11 08:53:18.736687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:11.088 [2024-12-11 08:53:18.736719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:11.088 [2024-12-11 08:53:18.736732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:11.088 [2024-12-11 08:53:18.736743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:11.088 [2024-12-11 08:53:18.736754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:11.088 [2024-12-11 08:53:18.736766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:11.088 08:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:11.347 [2024-12-11 08:53:19.008615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.347 08:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82830 00:19:12.173 2695.00 IOPS, 10.53 MiB/s [2024-12-11T08:53:19.947Z] [2024-12-11 08:53:19.752948] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
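The "Resetting controller successful" notice above is the host side of the listener cycle this test drives: the target's TCP listener on 10.0.0.3:4420 is removed mid-run (hence the connect() failures with errno 111 above), then re-added, and the pending reset completes once the port is back. A sketch of that cycle against the target's RPC socket, with the pause length only illustrative:

  # Drop the target listener, let the host-side reconnects fail for a while, then restore it.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 2   # illustrative pause; long enough for at least one ECONNREFUSED (errno 111) retry
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420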
00:19:14.047 2021.25 IOPS, 7.90 MiB/s [2024-12-11T08:53:22.757Z] 2824.40 IOPS, 11.03 MiB/s [2024-12-11T08:53:23.693Z] 3548.17 IOPS, 13.86 MiB/s [2024-12-11T08:53:24.630Z] 4065.29 IOPS, 15.88 MiB/s [2024-12-11T08:53:26.006Z] 4453.12 IOPS, 17.40 MiB/s [2024-12-11T08:53:26.943Z] 4742.33 IOPS, 18.52 MiB/s 00:19:19.169 Latency(us) 00:19:19.169 [2024-12-11T08:53:26.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.169 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.169 Verification LBA range: start 0x0 length 0x4000 00:19:19.169 NVMe0n1 : 10.00 4968.94 19.41 0.00 0.00 25714.19 2249.08 3019898.88 00:19:19.169 [2024-12-11T08:53:26.943Z] =================================================================================================================== 00:19:19.169 [2024-12-11T08:53:26.943Z] Total : 4968.94 19.41 0.00 0.00 25714.19 2249.08 3019898.88 00:19:19.169 { 00:19:19.169 "results": [ 00:19:19.169 { 00:19:19.169 "job": "NVMe0n1", 00:19:19.169 "core_mask": "0x4", 00:19:19.169 "workload": "verify", 00:19:19.169 "status": "finished", 00:19:19.169 "verify_range": { 00:19:19.169 "start": 0, 00:19:19.169 "length": 16384 00:19:19.169 }, 00:19:19.169 "queue_depth": 128, 00:19:19.169 "io_size": 4096, 00:19:19.169 "runtime": 10.003346, 00:19:19.169 "iops": 4968.937393548119, 00:19:19.169 "mibps": 19.40991169354734, 00:19:19.169 "io_failed": 0, 00:19:19.169 "io_timeout": 0, 00:19:19.169 "avg_latency_us": 25714.18677064777, 00:19:19.169 "min_latency_us": 2249.0763636363636, 00:19:19.169 "max_latency_us": 3019898.88 00:19:19.169 } 00:19:19.169 ], 00:19:19.169 "core_count": 1 00:19:19.169 } 00:19:19.169 08:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82935 00:19:19.169 08:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.169 08:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:19.169 Running I/O for 10 seconds... 
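The summary row in the table above follows directly from the JSON block next to it: MiB/s is just the reported IOPS scaled by the 4096-byte I/O size. A quick cross-check with the figures from this run:

  # 4968.94 IOPS of 4 KiB I/O -> MiB/s (matches the 19.41 reported above)
  awk 'BEGIN { printf "%.2f\n", 4968.937393548119 * 4096 / (1024 * 1024) }'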
00:19:20.175 08:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.175 6820.00 IOPS, 26.64 MiB/s [2024-12-11T08:53:27.949Z] [2024-12-11 08:53:27.887780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.887982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.887991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.175 [2024-12-11 08:53:27.888261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.175 [2024-12-11 08:53:27.888359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.175 [2024-12-11 08:53:27.888370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888659] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.888961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.176 [2024-12-11 08:53:27.888981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.888992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.889012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.889033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.889053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:20.176 [2024-12-11 08:53:27.889073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.176 [2024-12-11 08:53:27.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.176 [2024-12-11 08:53:27.889103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 
08:53:27.889288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:82 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.177 [2024-12-11 08:53:27.889843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.177 [2024-12-11 08:53:27.889852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63160 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.889983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.889992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.178 [2024-12-11 08:53:27.890113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.178 [2024-12-11 08:53:27.890449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ee10 is same with the state(6) to be set 00:19:20.178 [2024-12-11 08:53:27.890472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.178 [2024-12-11 08:53:27.890480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.178 [2024-12-11 08:53:27.890488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0 00:19:20.178 [2024-12-11 08:53:27.890497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.178 [2024-12-11 08:53:27.890772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:20.178 [2024-12-11 08:53:27.890858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:20.178 [2024-12-11 08:53:27.890962] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.178 [2024-12-11 08:53:27.890993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with 
addr=10.0.0.3, port=4420 00:19:20.178 [2024-12-11 08:53:27.891005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:20.178 [2024-12-11 08:53:27.891023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:20.178 [2024-12-11 08:53:27.891038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:20.178 [2024-12-11 08:53:27.891048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:20.178 [2024-12-11 08:53:27.891058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:20.178 [2024-12-11 08:53:27.891068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:20.178 [2024-12-11 08:53:27.891079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:20.178 08:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:21.374 3914.50 IOPS, 15.29 MiB/s [2024-12-11T08:53:29.148Z] [2024-12-11 08:53:28.891222] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.374 [2024-12-11 08:53:28.891290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with addr=10.0.0.3, port=4420 00:19:21.375 [2024-12-11 08:53:28.891306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:21.375 [2024-12-11 08:53:28.891338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:21.375 [2024-12-11 08:53:28.891357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:21.375 [2024-12-11 08:53:28.891367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:21.375 [2024-12-11 08:53:28.891377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:21.375 [2024-12-11 08:53:28.891388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
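Editor's note: errno 111 in the uring_sock_create errors above is ECONNREFUSED. The target's listener on 10.0.0.3:4420 was removed, so each reconnect attempt is refused and bdev_nvme keeps resetting the controller roughly once per second. The following is a stand-alone Python model of that retry cadence, for illustration only; it is not SPDK's bdev_nvme code, and the parameter name reconnect_delay_sec is borrowed from the RPC flag used later in this log.

import errno
import socket
import time

def try_connect(addr="10.0.0.3", port=4420, timeout=1.0):
    # One reconnect attempt against the NVMe/TCP listener; True on success.
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError as e:
        # With the listener removed this fails with errno 111 (ECONNREFUSED),
        # matching the "connect() failed, errno = 111" records above.
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno)})")
        return False

def reconnect_loop(reconnect_delay_sec=1, max_attempts=5):
    # Keep retrying until a connect succeeds, mirroring the once-per-second
    # "resetting controller" cadence visible in the timestamps above.
    for _ in range(max_attempts):
        if try_connect():
            return True
        time.sleep(reconnect_delay_sec)
    return False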
00:19:21.375 [2024-12-11 08:53:28.891399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:22.312 2609.67 IOPS, 10.19 MiB/s [2024-12-11T08:53:30.086Z] [2024-12-11 08:53:29.891510] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.312 [2024-12-11 08:53:29.891690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with addr=10.0.0.3, port=4420 00:19:22.312 [2024-12-11 08:53:29.891835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:22.312 [2024-12-11 08:53:29.891911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:22.312 [2024-12-11 08:53:29.892061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:22.312 [2024-12-11 08:53:29.892124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:22.312 [2024-12-11 08:53:29.892292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:22.312 [2024-12-11 08:53:29.892395] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:22.312 [2024-12-11 08:53:29.892463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:23.249 1957.25 IOPS, 7.65 MiB/s [2024-12-11T08:53:31.023Z] [2024-12-11 08:53:30.895372] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.249 [2024-12-11 08:53:30.895440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d0070 with addr=10.0.0.3, port=4420 00:19:23.249 [2024-12-11 08:53:30.895456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(6) to be set 00:19:23.249 [2024-12-11 08:53:30.895737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:19:23.249 [2024-12-11 08:53:30.895973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:23.249 [2024-12-11 08:53:30.895984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:23.249 [2024-12-11 08:53:30.895995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:23.249 [2024-12-11 08:53:30.896005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
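Editor's note: the fault injected here is on the target side. The test removes the TCP listener, waits a few seconds while the initiator fails to reconnect, then adds the listener back (seen in the next lines). A minimal Python sketch of that sequence, using the same scripts/rpc.py subcommands traced in this log; the 3-second pause mirrors the sleep at host/timeout.sh@101 above, and this wrapper is illustrative rather than the test script itself.

import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

# Pull the listener out from under the running initiator, let it fail and
# retry for a few seconds, then restore it so the controller reset succeeds.
subprocess.run([RPC, "nvmf_subsystem_remove_listener", NQN, *LISTENER], check=True)
time.sleep(3)
subprocess.run([RPC, "nvmf_subsystem_add_listener", NQN, *LISTENER], check=True)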
00:19:23.249 [2024-12-11 08:53:30.896015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:23.249 08:53:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.507 [2024-12-11 08:53:31.187024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:23.507 08:53:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82935 00:19:24.333 1565.80 IOPS, 6.12 MiB/s [2024-12-11T08:53:32.107Z] [2024-12-11 08:53:31.918539] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:26.205 2606.00 IOPS, 10.18 MiB/s [2024-12-11T08:53:34.915Z] 3582.71 IOPS, 13.99 MiB/s [2024-12-11T08:53:35.849Z] 4340.50 IOPS, 16.96 MiB/s [2024-12-11T08:53:36.785Z] 4915.56 IOPS, 19.20 MiB/s [2024-12-11T08:53:36.785Z] 5367.60 IOPS, 20.97 MiB/s 00:19:29.011 Latency(us) 00:19:29.011 [2024-12-11T08:53:36.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.011 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.011 Verification LBA range: start 0x0 length 0x4000 00:19:29.011 NVMe0n1 : 10.01 5376.48 21.00 3560.40 0.00 14293.85 755.90 3019898.88 00:19:29.011 [2024-12-11T08:53:36.785Z] =================================================================================================================== 00:19:29.011 [2024-12-11T08:53:36.785Z] Total : 5376.48 21.00 3560.40 0.00 14293.85 0.00 3019898.88 00:19:29.011 { 00:19:29.011 "results": [ 00:19:29.011 { 00:19:29.011 "job": "NVMe0n1", 00:19:29.011 "core_mask": "0x4", 00:19:29.011 "workload": "verify", 00:19:29.011 "status": "finished", 00:19:29.011 "verify_range": { 00:19:29.011 "start": 0, 00:19:29.011 "length": 16384 00:19:29.011 }, 00:19:29.011 "queue_depth": 128, 00:19:29.011 "io_size": 4096, 00:19:29.011 "runtime": 10.007291, 00:19:29.011 "iops": 5376.480008425857, 00:19:29.011 "mibps": 21.001875032913503, 00:19:29.011 "io_failed": 35630, 00:19:29.011 "io_timeout": 0, 00:19:29.011 "avg_latency_us": 14293.850825413154, 00:19:29.011 "min_latency_us": 755.8981818181818, 00:19:29.011 "max_latency_us": 3019898.88 00:19:29.011 } 00:19:29.011 ], 00:19:29.011 "core_count": 1 00:19:29.011 } 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82805 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82805 ']' 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82805 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.011 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82805 00:19:29.270 killing process with pid 82805 00:19:29.270 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.270 00:19:29.270 Latency(us) 00:19:29.270 [2024-12-11T08:53:37.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.270 [2024-12-11T08:53:37.044Z] =================================================================================================================== 00:19:29.270 [2024-12-11T08:53:37.044Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82805' 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82805 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82805 00:19:29.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=83049 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 83049 /var/tmp/bdevperf.sock 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83049 ']' 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.270 08:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:29.270 [2024-12-11 08:53:37.004425] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:19:29.270 [2024-12-11 08:53:37.005334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83049 ] 00:19:29.529 [2024-12-11 08:53:37.154106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.529 [2024-12-11 08:53:37.187458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.529 [2024-12-11 08:53:37.216923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:30.466 08:53:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.466 08:53:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:30.466 08:53:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:30.466 08:53:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=83065 00:19:30.466 08:53:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:30.724 08:53:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:30.983 NVMe0n1 00:19:30.983 08:53:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83111 00:19:30.983 08:53:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.983 08:53:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:31.243 Running I/O for 10 seconds... 
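Editor's note: the second bdevperf instance is configured over its own RPC socket before perform_tests runs: bdev_nvme_set_options, then bdev_nvme_attach_controller with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, which roughly bound how often bdev_nvme retries the connection and how long it keeps the controller alive once the path drops. Below is a minimal Python sketch of driving those same calls; the rpc() wrapper is illustrative, and the option values are copied verbatim from the trace above without interpretation.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bdevperf.sock"

def rpc(*args):
    # Thin wrapper around scripts/rpc.py aimed at the bdevperf RPC socket.
    subprocess.run([RPC, "-s", SOCK, *args], check=True)

# Same calls as traced above.
rpc("bdev_nvme_set_options", "-r", "-1", "-e", "9")
rpc("bdev_nvme_attach_controller",
    "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
    "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
    "--ctrlr-loss-timeout-sec", "5", "--reconnect-delay-sec", "2")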
00:19:32.179 08:53:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:32.440 14859.00 IOPS, 58.04 MiB/s [2024-12-11T08:53:40.214Z] [2024-12-11 08:53:39.966763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110eb50 is same with the state(6) to be set
[... the same tcp.c:1790 recv-state message for tqpair=0x110eb50 repeats more than a hundred times between 08:53:39.966812 and 08:53:39.967936; duplicate lines elided ...]
[2024-12-11 08:53:39.968011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-11 08:53:39.968057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the matching READ / ABORTED - SQ DELETION (00/08) pair repeats for every remaining queued command (cid 3 through 126, then cid 1 and cid 0, each with its own LBA) between 08:53:39.968114 and 08:53:39.972649; duplicate pairs elided ...]
00:19:32.445 [2024-12-11 08:53:39.972677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114f100 is same with the state(6) to be set
00:19:32.445 [2024-12-11 08:53:39.972690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:32.445 [2024-12-11 08:53:39.972698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:32.445 [2024-12-11 08:53:39.972706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0
00:19:32.445 [2024-12-11 08:53:39.972715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:32.445 [2024-12-11 08:53:39.973068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:19:32.445 [2024-12-11 08:53:39.973178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e1070 (9): Bad file descriptor
00:19:32.445 [2024-12-11 08:53:39.973285] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:32.445 [2024-12-11 08:53:39.973306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e1070 with addr=10.0.0.3, port=4420
00:19:32.445 [2024-12-11 08:53:39.973317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1070 is same with the state(6) to be set
00:19:32.445 [2024-12-11 08:53:39.973337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e1070 (9): Bad file descriptor
00:19:32.445 [2024-12-11 08:53:39.973352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:19:32.445 [2024-12-11 08:53:39.973362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:19:32.445 [2024-12-11 08:53:39.973372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:32.445 [2024-12-11 08:53:39.973382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:32.445 [2024-12-11 08:53:39.973392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:32.446 08:53:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83111 00:19:34.318 8383.00 IOPS, 32.75 MiB/s [2024-12-11T08:53:42.092Z] 5588.67 IOPS, 21.83 MiB/s [2024-12-11T08:53:42.092Z] [2024-12-11 08:53:41.973576] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:34.318 [2024-12-11 08:53:41.973786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e1070 with addr=10.0.0.3, port=4420 00:19:34.318 [2024-12-11 08:53:41.973987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1070 is same with the state(6) to be set 00:19:34.318 [2024-12-11 08:53:41.974232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e1070 (9): Bad file descriptor 00:19:34.318 [2024-12-11 08:53:41.974523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:34.318 [2024-12-11 08:53:41.974759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:34.318 [2024-12-11 08:53:41.974998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:34.318 [2024-12-11 08:53:41.975212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:34.318 [2024-12-11 08:53:41.975472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:36.216 4191.50 IOPS, 16.37 MiB/s [2024-12-11T08:53:43.990Z] 3353.20 IOPS, 13.10 MiB/s [2024-12-11T08:53:43.990Z] [2024-12-11 08:53:43.975871] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.216 [2024-12-11 08:53:43.976096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e1070 with addr=10.0.0.3, port=4420 00:19:36.216 [2024-12-11 08:53:43.976420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1070 is same with the state(6) to be set 00:19:36.216 [2024-12-11 08:53:43.976636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e1070 (9): Bad file descriptor 00:19:36.216 [2024-12-11 08:53:43.976892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:36.216 [2024-12-11 08:53:43.976922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:36.216 [2024-12-11 08:53:43.976941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:36.216 [2024-12-11 08:53:43.976954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
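The repeated "connect() failed, errno = 111" / "Resetting controller failed." records above show the bdev_nvme layer retrying the TCP connection to 10.0.0.3:4420 roughly every two seconds after the target stops accepting connections (errno 111 is ECONNREFUSED on Linux). A minimal sketch of attaching a controller with explicit reconnect and controller-loss bounds is shown below; the timeout option names are an assumption about the rpc.py version in use and are not taken from this log:

  # Hedged sketch: attach an NVMe-oF TCP controller with bounded reconnect behaviour.
  # --reconnect-delay-sec / --ctrlr-loss-timeout-sec / --fast-io-fail-timeout-sec are
  # assumed option names, not shown anywhere in this log.
  scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 8 \
      --fast-io-fail-timeout-sec 4

With values like these the bdev layer retries every --reconnect-delay-sec seconds and gives up once --ctrlr-loss-timeout-sec has elapsed, which is consistent with the ~2 s spacing of the attempts here and the ~8 s test time reported further down.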
00:19:36.216 [2024-12-11 08:53:43.976973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:38.086 2794.33 IOPS, 10.92 MiB/s [2024-12-11T08:53:46.118Z] 2395.14 IOPS, 9.36 MiB/s [2024-12-11T08:53:46.118Z] [2024-12-11 08:53:45.977066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:38.344 [2024-12-11 08:53:45.977108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:38.344 [2024-12-11 08:53:45.977136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:38.344 [2024-12-11 08:53:45.977146] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:38.344 [2024-12-11 08:53:45.977175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:39.280 2095.75 IOPS, 8.19 MiB/s 00:19:39.280 Latency(us) 00:19:39.280 [2024-12-11T08:53:47.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.280 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:39.280 NVMe0n1 : 8.15 2058.04 8.04 15.71 0.00 61607.24 7685.59 7015926.69 00:19:39.280 [2024-12-11T08:53:47.055Z] =================================================================================================================== 00:19:39.281 [2024-12-11T08:53:47.055Z] Total : 2058.04 8.04 15.71 0.00 61607.24 7685.59 7015926.69 00:19:39.281 { 00:19:39.281 "results": [ 00:19:39.281 { 00:19:39.281 "job": "NVMe0n1", 00:19:39.281 "core_mask": "0x4", 00:19:39.281 "workload": "randread", 00:19:39.281 "status": "finished", 00:19:39.281 "queue_depth": 128, 00:19:39.281 "io_size": 4096, 00:19:39.281 "runtime": 8.146585, 00:19:39.281 "iops": 2058.040270861962, 00:19:39.281 "mibps": 8.03921980805454, 00:19:39.281 "io_failed": 128, 00:19:39.281 "io_timeout": 0, 00:19:39.281 "avg_latency_us": 61607.23963106859, 00:19:39.281 "min_latency_us": 7685.585454545455, 00:19:39.281 "max_latency_us": 7015926.69090909 00:19:39.281 } 00:19:39.281 ], 00:19:39.281 "core_count": 1 00:19:39.281 } 00:19:39.281 08:53:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.281 Attaching 5 probes... 
00:19:39.281 1467.068078: reset bdev controller NVMe0 00:19:39.281 1467.232234: reconnect bdev controller NVMe0 00:19:39.281 3467.465145: reconnect delay bdev controller NVMe0 00:19:39.281 3467.500134: reconnect bdev controller NVMe0 00:19:39.281 5469.758004: reconnect delay bdev controller NVMe0 00:19:39.281 5469.775904: reconnect bdev controller NVMe0 00:19:39.281 7471.036654: reconnect delay bdev controller NVMe0 00:19:39.281 7471.071900: reconnect bdev controller NVMe0 00:19:39.281 08:53:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 83065 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 83049 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83049 ']' 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83049 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83049 00:19:39.281 killing process with pid 83049 00:19:39.281 Received shutdown signal, test time was about 8.216955 seconds 00:19:39.281 00:19:39.281 Latency(us) 00:19:39.281 [2024-12-11T08:53:47.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.281 [2024-12-11T08:53:47.055Z] =================================================================================================================== 00:19:39.281 [2024-12-11T08:53:47.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83049' 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83049 00:19:39.281 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83049 00:19:39.539 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.798 08:53:47 
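The JSON block printed above carries the same figures as the human-readable latency table. A small post-processing sketch, assuming the block has been captured to a file (perf.json is an illustrative name) and that jq is available on the build host:

  # Print job name, IOPS, failed I/O count and average latency from the results JSON.
  jq -r '.results[] | [.job, .iops, .io_failed, .avg_latency_us] | @tsv' perf.json

For the run above this yields one line for NVMe0n1: roughly 2058 IOPS, 128 failed I/Os and an average latency of about 61.6 ms.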
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.798 rmmod nvme_tcp 00:19:39.798 rmmod nvme_fabrics 00:19:39.798 rmmod nvme_keyring 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82612 ']' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82612 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82612 ']' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82612 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82612 00:19:39.798 killing process with pid 82612 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82612' 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82612 00:19:39.798 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82612 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:40.057 08:53:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:40.057 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:40.316 00:19:40.316 real 0m47.093s 00:19:40.316 user 2m19.124s 00:19:40.316 sys 0m5.401s 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.316 08:53:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.316 ************************************ 00:19:40.316 END TEST nvmf_timeout 00:19:40.316 ************************************ 00:19:40.316 08:53:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:40.316 08:53:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:40.316 00:19:40.316 real 4m58.392s 00:19:40.316 user 13m6.076s 00:19:40.316 sys 1m5.678s 00:19:40.316 08:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.316 08:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.316 ************************************ 00:19:40.316 END TEST nvmf_host 00:19:40.316 ************************************ 00:19:40.316 08:53:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:40.316 08:53:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:40.316 00:19:40.316 real 12m25.971s 00:19:40.316 user 30m5.262s 00:19:40.316 sys 3m4.105s 00:19:40.316 08:53:48 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.316 ************************************ 00:19:40.316 08:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.316 END TEST nvmf_tcp 00:19:40.316 ************************************ 00:19:40.575 08:53:48 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:19:40.575 08:53:48 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:40.575 08:53:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.575 08:53:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.575 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:40.575 ************************************ 00:19:40.575 START TEST nvmf_dif 00:19:40.575 ************************************ 00:19:40.575 08:53:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:40.575 * Looking for test storage... 
00:19:40.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.575 08:53:48 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.575 08:53:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.575 08:53:48 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.575 08:53:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.575 08:53:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.576 --rc genhtml_branch_coverage=1 00:19:40.576 --rc genhtml_function_coverage=1 00:19:40.576 --rc genhtml_legend=1 00:19:40.576 --rc geninfo_all_blocks=1 00:19:40.576 --rc geninfo_unexecuted_blocks=1 00:19:40.576 00:19:40.576 ' 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.576 --rc genhtml_branch_coverage=1 00:19:40.576 --rc genhtml_function_coverage=1 00:19:40.576 --rc genhtml_legend=1 00:19:40.576 --rc geninfo_all_blocks=1 00:19:40.576 --rc geninfo_unexecuted_blocks=1 00:19:40.576 00:19:40.576 ' 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.576 --rc genhtml_branch_coverage=1 00:19:40.576 --rc genhtml_function_coverage=1 00:19:40.576 --rc genhtml_legend=1 00:19:40.576 --rc geninfo_all_blocks=1 00:19:40.576 --rc geninfo_unexecuted_blocks=1 00:19:40.576 00:19:40.576 ' 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.576 --rc genhtml_branch_coverage=1 00:19:40.576 --rc genhtml_function_coverage=1 00:19:40.576 --rc genhtml_legend=1 00:19:40.576 --rc geninfo_all_blocks=1 00:19:40.576 --rc geninfo_unexecuted_blocks=1 00:19:40.576 00:19:40.576 ' 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.576 08:53:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.576 08:53:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.576 08:53:48 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.576 08:53:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.576 08:53:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:40.576 08:53:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.576 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:40.576 08:53:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:40.576 08:53:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:40.576 08:53:48 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:40.576 Cannot find device "nvmf_init_br" 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:40.576 Cannot find device "nvmf_init_br2" 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:40.576 08:53:48 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:40.835 Cannot find device "nvmf_tgt_br" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.835 Cannot find device "nvmf_tgt_br2" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:40.835 Cannot find device "nvmf_init_br" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:40.835 Cannot find device "nvmf_init_br2" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:40.835 Cannot find device "nvmf_tgt_br" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:40.835 Cannot find device "nvmf_tgt_br2" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:40.835 Cannot find device "nvmf_br" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:19:40.835 Cannot find device "nvmf_init_if" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:40.835 Cannot find device "nvmf_init_if2" 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:40.835 08:53:48 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:40.836 08:53:48 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.095 08:53:48 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:41.095 00:19:41.095 --- 10.0.0.3 ping statistics --- 00:19:41.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.095 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:19:41.095 00:19:41.095 --- 10.0.0.4 ping statistics --- 00:19:41.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.095 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:41.095 00:19:41.095 --- 10.0.0.1 ping statistics --- 00:19:41.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.095 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:41.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:19:41.095 00:19:41.095 --- 10.0.0.2 ping statistics --- 00:19:41.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.095 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:41.095 08:53:48 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:41.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.354 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.354 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.354 08:53:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:41.354 08:53:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83601 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.354 08:53:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83601 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83601 ']' 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.354 08:53:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.354 [2024-12-11 08:53:49.120278] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:19:41.354 [2024-12-11 08:53:49.120389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.612 [2024-12-11 08:53:49.267893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.612 [2024-12-11 08:53:49.305725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
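The nvmf_veth_init sequence above (netns creation, veth pairs, the nvmf_br bridge, iptables ACCEPT rules and the ping checks) builds the virtual topology the dif tests run against: initiator addresses 10.0.0.1/2 in the root namespace, target addresses 10.0.0.3/4 inside nvmf_tgt_ns_spdk, all joined by one bridge. A condensed sketch of a single initiator/target pair, using only commands that appear in this log (run as root; the second pair is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # root namespace -> target namespace, answers in well under 1 ms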
00:19:41.612 [2024-12-11 08:53:49.305788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.612 [2024-12-11 08:53:49.305813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.612 [2024-12-11 08:53:49.305822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.612 [2024-12-11 08:53:49.305831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.612 [2024-12-11 08:53:49.306226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.612 [2024-12-11 08:53:49.339984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:19:41.872 08:53:49 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 08:53:49 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.872 08:53:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:41.872 08:53:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 [2024-12-11 08:53:49.478934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.872 08:53:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 ************************************ 00:19:41.872 START TEST fio_dif_1_default 00:19:41.872 ************************************ 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 bdev_null0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:41.872 
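Taken together, the rpc_cmd calls above and the nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls that follow amount to a small target configuration for the DIF test: a TCP transport with DIF insert/strip enabled, a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, and one subsystem exposing it on 10.0.0.3:4420. A consolidated sketch using the same rpc.py verbs (the nvmf_tgt started above listens on the default /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.3 -s 4420

fio then drives this namespace through the spdk_bdev ioengine (LD_PRELOAD of build/fio/spdk_bdev plus --spdk_json_conf pointing at a generated bdev_nvme_attach_controller config), as the fio_bdev invocation below shows.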
08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:41.872 [2024-12-11 08:53:49.523089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:41.872 { 00:19:41.872 "params": { 00:19:41.872 "name": "Nvme$subsystem", 00:19:41.872 "trtype": "$TEST_TRANSPORT", 00:19:41.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.872 "adrfam": "ipv4", 00:19:41.872 "trsvcid": "$NVMF_PORT", 00:19:41.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.872 "hdgst": ${hdgst:-false}, 00:19:41.872 "ddgst": ${ddgst:-false} 00:19:41.872 }, 00:19:41.872 "method": "bdev_nvme_attach_controller" 00:19:41.872 } 00:19:41.872 EOF 00:19:41.872 )") 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:41.872 "params": { 00:19:41.872 "name": "Nvme0", 00:19:41.872 "trtype": "tcp", 00:19:41.872 "traddr": "10.0.0.3", 00:19:41.872 "adrfam": "ipv4", 00:19:41.872 "trsvcid": "4420", 00:19:41.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:41.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:41.872 "hdgst": false, 00:19:41.872 "ddgst": false 00:19:41.872 }, 00:19:41.872 "method": "bdev_nvme_attach_controller" 00:19:41.872 }' 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:41.872 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:41.873 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:41.873 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:41.873 08:53:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.132 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:42.132 fio-3.35 00:19:42.132 Starting 1 thread 00:19:54.346 00:19:54.346 filename0: (groupid=0, jobs=1): err= 0: pid=83660: Wed Dec 11 08:54:00 2024 00:19:54.346 read: IOPS=8721, BW=34.1MiB/s (35.7MB/s)(341MiB/10001msec) 00:19:54.346 slat (usec): min=6, max=203, avg= 8.64, stdev= 3.85 00:19:54.346 clat (usec): min=353, max=3953, avg=433.17, stdev=48.43 00:19:54.346 lat (usec): min=360, max=3974, avg=441.81, stdev=49.11 00:19:54.346 clat percentiles (usec): 00:19:54.346 | 1.00th=[ 367], 5.00th=[ 379], 
10.00th=[ 388], 20.00th=[ 404], 00:19:54.346 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 00:19:54.346 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 498], 00:19:54.346 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 676], 99.95th=[ 848], 00:19:54.346 | 99.99th=[ 1631] 00:19:54.346 bw ( KiB/s): min=33856, max=36384, per=100.00%, avg=34910.32, stdev=695.09, samples=19 00:19:54.346 iops : min= 8464, max= 9096, avg=8727.58, stdev=173.77, samples=19 00:19:54.346 lat (usec) : 500=95.73%, 750=4.20%, 1000=0.05% 00:19:54.346 lat (msec) : 2=0.01%, 4=0.01% 00:19:54.346 cpu : usr=82.86%, sys=15.10%, ctx=105, majf=0, minf=9 00:19:54.346 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.346 issued rwts: total=87224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.346 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:54.346 00:19:54.346 Run status group 0 (all jobs): 00:19:54.346 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=341MiB (357MB), run=10001-10001msec 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 00:19:54.346 real 0m10.917s 00:19:54.346 user 0m8.860s 00:19:54.346 sys 0m1.749s 00:19:54.346 ************************************ 00:19:54.346 END TEST fio_dif_1_default 00:19:54.346 ************************************ 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:54.346 08:54:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.346 08:54:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 ************************************ 00:19:54.346 START TEST fio_dif_1_multi_subsystems 00:19:54.346 ************************************ 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 bdev_null0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 [2024-12-11 08:54:00.493285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 bdev_null1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:54.346 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:54.347 { 00:19:54.347 "params": { 00:19:54.347 "name": "Nvme$subsystem", 00:19:54.347 "trtype": "$TEST_TRANSPORT", 00:19:54.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.347 "adrfam": "ipv4", 00:19:54.347 "trsvcid": "$NVMF_PORT", 00:19:54.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.347 "hdgst": ${hdgst:-false}, 00:19:54.347 "ddgst": ${ddgst:-false} 00:19:54.347 }, 00:19:54.347 "method": "bdev_nvme_attach_controller" 00:19:54.347 } 00:19:54.347 EOF 00:19:54.347 )") 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.347 08:54:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:54.347 { 00:19:54.347 "params": { 00:19:54.347 "name": "Nvme$subsystem", 00:19:54.347 "trtype": "$TEST_TRANSPORT", 00:19:54.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.347 "adrfam": "ipv4", 00:19:54.347 "trsvcid": "$NVMF_PORT", 00:19:54.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.347 "hdgst": ${hdgst:-false}, 00:19:54.347 "ddgst": ${ddgst:-false} 00:19:54.347 }, 00:19:54.347 "method": "bdev_nvme_attach_controller" 00:19:54.347 } 00:19:54.347 EOF 00:19:54.347 )") 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
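The job description that gen_fio_conf is assembling in the trace above reaches fio through /dev/fd/61 and is never echoed into this log; only the bdev JSON is printed below. As a rough, hedged sketch of what that job config could look like for this two-subsystem run, inferred from the parameters fio reports further down (randread, 4096-byte blocks, iodepth 4, roughly 10 s of runtime, one job per null bdev), written as the kind of heredoc the traced scripts themselves use. The bdev names Nvme0n1/Nvme1n1, the time_based/runtime options, and the temporary file name are assumptions, not taken from the log:

# Hypothetical stand-in for the config gen_fio_conf streams to fio via /dev/fd/61.
cat > /tmp/dif_multi.fio <<'EOF'
[global]
# spdk_bdev comes from the preloaded build/fio/spdk_bdev plugin; thread=1 is required by it
thread=1
ioengine=spdk_bdev
rw=randread
bs=4096
iodepth=4
# assumed; the run reported below lasts about 10001 msec
time_based=1
runtime=10

# one job per namespace bdev; the Nvme*n1 names are assumptions
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF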
00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:54.347 "params": { 00:19:54.347 "name": "Nvme0", 00:19:54.347 "trtype": "tcp", 00:19:54.347 "traddr": "10.0.0.3", 00:19:54.347 "adrfam": "ipv4", 00:19:54.347 "trsvcid": "4420", 00:19:54.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:54.347 "hdgst": false, 00:19:54.347 "ddgst": false 00:19:54.347 }, 00:19:54.347 "method": "bdev_nvme_attach_controller" 00:19:54.347 },{ 00:19:54.347 "params": { 00:19:54.347 "name": "Nvme1", 00:19:54.347 "trtype": "tcp", 00:19:54.347 "traddr": "10.0.0.3", 00:19:54.347 "adrfam": "ipv4", 00:19:54.347 "trsvcid": "4420", 00:19:54.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.347 "hdgst": false, 00:19:54.347 "ddgst": false 00:19:54.347 }, 00:19:54.347 "method": "bdev_nvme_attach_controller" 00:19:54.347 }' 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.347 08:54:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.347 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:54.347 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:54.347 fio-3.35 00:19:54.347 Starting 2 threads 00:20:04.323 00:20:04.323 filename0: (groupid=0, jobs=1): err= 0: pid=83820: Wed Dec 11 08:54:11 2024 00:20:04.323 read: IOPS=4663, BW=18.2MiB/s (19.1MB/s)(182MiB/10001msec) 00:20:04.323 slat (nsec): min=7030, max=59392, avg=14236.37, stdev=4591.16 00:20:04.323 clat (usec): min=627, max=2008, avg=819.10, stdev=59.55 00:20:04.323 lat (usec): min=634, max=2039, avg=833.34, stdev=60.73 00:20:04.323 clat percentiles (usec): 00:20:04.323 | 1.00th=[ 685], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:20:04.323 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:20:04.323 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 914], 00:20:04.323 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1319], 00:20:04.323 | 99.99th=[ 1483] 00:20:04.323 bw ( KiB/s): min=17824, max=19392, per=50.02%, avg=18664.42, stdev=372.85, samples=19 00:20:04.323 iops : min= 4456, max= 
4848, avg=4666.11, stdev=93.21, samples=19 00:20:04.323 lat (usec) : 750=11.60%, 1000=88.28% 00:20:04.323 lat (msec) : 2=0.11%, 4=0.01% 00:20:04.323 cpu : usr=90.00%, sys=8.59%, ctx=10, majf=0, minf=0 00:20:04.323 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:04.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.323 issued rwts: total=46644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.323 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:04.323 filename1: (groupid=0, jobs=1): err= 0: pid=83821: Wed Dec 11 08:54:11 2024 00:20:04.323 read: IOPS=4664, BW=18.2MiB/s (19.1MB/s)(182MiB/10001msec) 00:20:04.323 slat (nsec): min=7087, max=60113, avg=14391.60, stdev=4687.00 00:20:04.323 clat (usec): min=529, max=1993, avg=818.22, stdev=50.96 00:20:04.323 lat (usec): min=537, max=2020, avg=832.61, stdev=51.49 00:20:04.323 clat percentiles (usec): 00:20:04.323 | 1.00th=[ 717], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 775], 00:20:04.323 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:20:04.323 | 70.00th=[ 848], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898], 00:20:04.323 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 1254], 00:20:04.323 | 99.99th=[ 1483] 00:20:04.323 bw ( KiB/s): min=17824, max=19392, per=50.03%, avg=18666.11, stdev=373.50, samples=19 00:20:04.323 iops : min= 4456, max= 4848, avg=4666.53, stdev=93.38, samples=19 00:20:04.323 lat (usec) : 750=8.03%, 1000=91.91% 00:20:04.323 lat (msec) : 2=0.06% 00:20:04.323 cpu : usr=89.83%, sys=8.76%, ctx=13, majf=0, minf=0 00:20:04.323 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:04.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.323 issued rwts: total=46648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.323 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:04.323 00:20:04.323 Run status group 0 (all jobs): 00:20:04.323 READ: bw=36.4MiB/s (38.2MB/s), 18.2MiB/s-18.2MiB/s (19.1MB/s-19.1MB/s), io=364MiB (382MB), run=10001-10001msec 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.323 00:20:04.323 real 0m11.064s 00:20:04.323 user 0m18.745s 00:20:04.323 sys 0m1.974s 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 ************************************ 00:20:04.323 END TEST fio_dif_1_multi_subsystems 00:20:04.323 ************************************ 00:20:04.323 08:54:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:04.323 08:54:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.323 08:54:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.323 08:54:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 ************************************ 00:20:04.323 START TEST fio_dif_rand_params 00:20:04.323 ************************************ 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:04.323 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:04.324 08:54:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 bdev_null0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 [2024-12-11 08:54:11.612259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.324 { 00:20:04.324 "params": { 00:20:04.324 "name": "Nvme$subsystem", 00:20:04.324 "trtype": "$TEST_TRANSPORT", 00:20:04.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.324 "adrfam": "ipv4", 00:20:04.324 "trsvcid": "$NVMF_PORT", 00:20:04.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.324 "hdgst": ${hdgst:-false}, 00:20:04.324 "ddgst": ${ddgst:-false} 00:20:04.324 }, 00:20:04.324 "method": "bdev_nvme_attach_controller" 00:20:04.324 } 00:20:04.324 EOF 
00:20:04.324 )") 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:04.324 "params": { 00:20:04.324 "name": "Nvme0", 00:20:04.324 "trtype": "tcp", 00:20:04.324 "traddr": "10.0.0.3", 00:20:04.324 "adrfam": "ipv4", 00:20:04.324 "trsvcid": "4420", 00:20:04.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:04.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:04.324 "hdgst": false, 00:20:04.324 "ddgst": false 00:20:04.324 }, 00:20:04.324 "method": "bdev_nvme_attach_controller" 00:20:04.324 }' 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.324 08:54:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:04.324 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:04.324 ... 
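The LD_PRELOAD line just above is the mechanism the harness uses to drive fio: the spdk_bdev engine library is preloaded, and both the bdev JSON (the bdev_nvme_attach_controller block printed above, wrapped into the usual subsystems/bdev config structure) and the job file are handed over as /dev/fd descriptors. A sketch of the same invocation with ordinary files substituted for the descriptors; /tmp/bdev.json and /tmp/dif.fio are placeholder names, everything else is copied from the log:

# Sketch: reproduce the harness's fio invocation outside of dif.sh.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio

fio then reports one job block per configured file, which is what the per-thread result sections further down in this log correspond to.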
00:20:04.324 fio-3.35 00:20:04.324 Starting 3 threads 00:20:09.595 00:20:09.595 filename0: (groupid=0, jobs=1): err= 0: pid=83973: Wed Dec 11 08:54:17 2024 00:20:09.595 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(153MiB/5001msec) 00:20:09.595 slat (nsec): min=6757, max=41184, avg=11250.74, stdev=4797.40 00:20:09.595 clat (usec): min=11557, max=14788, avg=12197.59, stdev=302.39 00:20:09.595 lat (usec): min=11564, max=14813, avg=12208.84, stdev=302.87 00:20:09.595 clat percentiles (usec): 00:20:09.595 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:20:09.595 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:20:09.595 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:09.595 | 99.00th=[12780], 99.50th=[12911], 99.90th=[14746], 99.95th=[14746], 00:20:09.595 | 99.99th=[14746] 00:20:09.595 bw ( KiB/s): min=30658, max=31488, per=33.21%, avg=31310.44, stdev=352.67, samples=9 00:20:09.595 iops : min= 239, max= 246, avg=244.56, stdev= 2.88, samples=9 00:20:09.595 lat (msec) : 20=100.00% 00:20:09.595 cpu : usr=90.86%, sys=8.50%, ctx=8, majf=0, minf=0 00:20:09.595 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.595 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.595 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.595 filename0: (groupid=0, jobs=1): err= 0: pid=83974: Wed Dec 11 08:54:17 2024 00:20:09.595 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(154MiB/5006msec) 00:20:09.595 slat (nsec): min=8139, max=49533, avg=15185.58, stdev=3921.46 00:20:09.595 clat (usec): min=8381, max=13373, avg=12174.61, stdev=359.45 00:20:09.595 lat (usec): min=8395, max=13398, avg=12189.80, stdev=359.62 00:20:09.595 clat percentiles (usec): 00:20:09.595 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:20:09.595 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:20:09.595 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:09.595 | 99.00th=[12911], 99.50th=[12911], 99.90th=[13304], 99.95th=[13435], 00:20:09.595 | 99.99th=[13435] 00:20:09.595 bw ( KiB/s): min=30720, max=32256, per=33.32%, avg=31411.20, stdev=435.95, samples=10 00:20:09.595 iops : min= 240, max= 252, avg=245.40, stdev= 3.41, samples=10 00:20:09.595 lat (msec) : 10=0.49%, 20=99.51% 00:20:09.595 cpu : usr=91.09%, sys=8.33%, ctx=6, majf=0, minf=0 00:20:09.595 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.595 issued rwts: total=1230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.595 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.595 filename0: (groupid=0, jobs=1): err= 0: pid=83975: Wed Dec 11 08:54:17 2024 00:20:09.595 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(154MiB/5006msec) 00:20:09.595 slat (nsec): min=7915, max=50486, avg=15814.87, stdev=4132.78 00:20:09.595 clat (usec): min=8384, max=13399, avg=12172.66, stdev=359.59 00:20:09.595 lat (usec): min=8397, max=13426, avg=12188.48, stdev=359.84 00:20:09.595 clat percentiles (usec): 00:20:09.595 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:20:09.595 | 30.00th=[11994], 40.00th=[12125], 
50.00th=[12125], 60.00th=[12256], 00:20:09.595 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:09.596 | 99.00th=[12911], 99.50th=[12911], 99.90th=[13435], 99.95th=[13435], 00:20:09.596 | 99.99th=[13435] 00:20:09.596 bw ( KiB/s): min=30720, max=32256, per=33.32%, avg=31411.20, stdev=435.95, samples=10 00:20:09.596 iops : min= 240, max= 252, avg=245.40, stdev= 3.41, samples=10 00:20:09.596 lat (msec) : 10=0.49%, 20=99.51% 00:20:09.596 cpu : usr=91.09%, sys=8.33%, ctx=10, majf=0, minf=0 00:20:09.596 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.596 issued rwts: total=1230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.596 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.596 00:20:09.596 Run status group 0 (all jobs): 00:20:09.596 READ: bw=92.1MiB/s (96.5MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=461MiB (483MB), run=5001-5006msec 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:09.855 
08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 bdev_null0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.855 [2024-12-11 08:54:17.573642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:09.855 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.856 bdev_null1 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.856 08:54:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.856 bdev_null2 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.856 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.115 08:54:17 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:10.115 { 00:20:10.115 "params": { 00:20:10.115 "name": "Nvme$subsystem", 00:20:10.115 "trtype": "$TEST_TRANSPORT", 00:20:10.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.115 "adrfam": "ipv4", 00:20:10.115 "trsvcid": "$NVMF_PORT", 00:20:10.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.115 "hdgst": ${hdgst:-false}, 00:20:10.115 "ddgst": ${ddgst:-false} 00:20:10.115 }, 00:20:10.115 "method": "bdev_nvme_attach_controller" 00:20:10.115 } 00:20:10.115 EOF 00:20:10.115 )") 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:10.115 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:10.116 { 00:20:10.116 "params": { 00:20:10.116 "name": "Nvme$subsystem", 00:20:10.116 "trtype": "$TEST_TRANSPORT", 00:20:10.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.116 "adrfam": "ipv4", 00:20:10.116 "trsvcid": "$NVMF_PORT", 00:20:10.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.116 "hdgst": ${hdgst:-false}, 00:20:10.116 "ddgst": ${ddgst:-false} 00:20:10.116 }, 00:20:10.116 "method": "bdev_nvme_attach_controller" 00:20:10.116 } 00:20:10.116 EOF 00:20:10.116 )") 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:10.116 
08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:10.116 { 00:20:10.116 "params": { 00:20:10.116 "name": "Nvme$subsystem", 00:20:10.116 "trtype": "$TEST_TRANSPORT", 00:20:10.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.116 "adrfam": "ipv4", 00:20:10.116 "trsvcid": "$NVMF_PORT", 00:20:10.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.116 "hdgst": ${hdgst:-false}, 00:20:10.116 "ddgst": ${ddgst:-false} 00:20:10.116 }, 00:20:10.116 "method": "bdev_nvme_attach_controller" 00:20:10.116 } 00:20:10.116 EOF 00:20:10.116 )") 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:10.116 "params": { 00:20:10.116 "name": "Nvme0", 00:20:10.116 "trtype": "tcp", 00:20:10.116 "traddr": "10.0.0.3", 00:20:10.116 "adrfam": "ipv4", 00:20:10.116 "trsvcid": "4420", 00:20:10.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:10.116 "hdgst": false, 00:20:10.116 "ddgst": false 00:20:10.116 }, 00:20:10.116 "method": "bdev_nvme_attach_controller" 00:20:10.116 },{ 00:20:10.116 "params": { 00:20:10.116 "name": "Nvme1", 00:20:10.116 "trtype": "tcp", 00:20:10.116 "traddr": "10.0.0.3", 00:20:10.116 "adrfam": "ipv4", 00:20:10.116 "trsvcid": "4420", 00:20:10.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.116 "hdgst": false, 00:20:10.116 "ddgst": false 00:20:10.116 }, 00:20:10.116 "method": "bdev_nvme_attach_controller" 00:20:10.116 },{ 00:20:10.116 "params": { 00:20:10.116 "name": "Nvme2", 00:20:10.116 "trtype": "tcp", 00:20:10.116 "traddr": "10.0.0.3", 00:20:10.116 "adrfam": "ipv4", 00:20:10.116 "trsvcid": "4420", 00:20:10.116 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:10.116 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:10.116 "hdgst": false, 00:20:10.116 "ddgst": false 00:20:10.116 }, 00:20:10.116 "method": "bdev_nvme_attach_controller" 00:20:10.116 }' 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:10.116 08:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.116 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:10.116 ... 00:20:10.116 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:10.116 ... 00:20:10.116 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:10.116 ... 00:20:10.116 fio-3.35 00:20:10.116 Starting 24 threads 00:20:22.319 00:20:22.319 filename0: (groupid=0, jobs=1): err= 0: pid=84070: Wed Dec 11 08:54:28 2024 00:20:22.319 read: IOPS=229, BW=916KiB/s (938kB/s)(9200KiB/10040msec) 00:20:22.319 slat (usec): min=8, max=8025, avg=21.11, stdev=193.85 00:20:22.319 clat (msec): min=13, max=128, avg=69.64, stdev=19.00 00:20:22.319 lat (msec): min=13, max=128, avg=69.66, stdev=18.99 00:20:22.319 clat percentiles (msec): 00:20:22.319 | 1.00th=[ 16], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:20:22.319 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:20:22.319 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 105], 00:20:22.319 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 128], 00:20:22.319 | 99.99th=[ 129] 00:20:22.319 bw ( KiB/s): min= 784, max= 1152, per=4.27%, avg=915.50, stdev=87.99, samples=20 00:20:22.319 iops : min= 196, max= 288, avg=228.80, stdev=22.04, samples=20 00:20:22.319 lat (msec) : 20=1.30%, 50=14.65%, 100=76.96%, 250=7.09% 00:20:22.319 cpu : usr=42.11%, sys=2.52%, ctx=1439, majf=0, minf=9 00:20:22.319 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:22.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.319 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.319 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.319 filename0: (groupid=0, jobs=1): err= 0: pid=84071: Wed Dec 11 08:54:28 2024 00:20:22.319 read: IOPS=225, BW=902KiB/s (924kB/s)(9072KiB/10054msec) 00:20:22.319 slat (usec): min=4, max=8025, avg=34.39, stdev=335.20 00:20:22.319 clat (msec): min=13, max=134, avg=70.71, stdev=18.38 00:20:22.319 lat (msec): min=13, max=134, avg=70.74, stdev=18.39 00:20:22.319 clat percentiles (msec): 00:20:22.319 | 1.00th=[ 28], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:20:22.319 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:22.319 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 105], 00:20:22.319 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 130], 00:20:22.319 | 99.99th=[ 136] 00:20:22.319 bw ( KiB/s): min= 792, max= 1090, per=4.20%, avg=900.75, stdev=79.39, samples=20 00:20:22.319 iops : min= 198, max= 272, avg=225.15, stdev=19.77, samples=20 00:20:22.319 lat (msec) : 20=0.71%, 50=13.10%, 100=79.14%, 250=7.05% 00:20:22.319 cpu : usr=40.61%, sys=2.38%, ctx=1224, majf=0, minf=9 00:20:22.319 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:22.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.319 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.319 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.319 
latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.319 filename0: (groupid=0, jobs=1): err= 0: pid=84072: Wed Dec 11 08:54:28 2024 00:20:22.319 read: IOPS=216, BW=864KiB/s (885kB/s)(8648KiB/10008msec) 00:20:22.319 slat (usec): min=3, max=4032, avg=21.54, stdev=145.13 00:20:22.319 clat (msec): min=9, max=135, avg=73.91, stdev=18.98 00:20:22.319 lat (msec): min=9, max=136, avg=73.93, stdev=18.97 00:20:22.319 clat percentiles (msec): 00:20:22.319 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 57], 00:20:22.319 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 77], 00:20:22.319 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 109], 00:20:22.319 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 136], 00:20:22.319 | 99.99th=[ 136] 00:20:22.319 bw ( KiB/s): min= 641, max= 976, per=3.94%, avg=845.53, stdev=98.55, samples=19 00:20:22.319 iops : min= 160, max= 244, avg=211.37, stdev=24.67, samples=19 00:20:22.319 lat (msec) : 10=0.28%, 20=0.46%, 50=8.56%, 100=79.69%, 250=11.01% 00:20:22.319 cpu : usr=43.79%, sys=2.68%, ctx=1811, majf=0, minf=9 00:20:22.319 IO depths : 1=0.1%, 2=2.7%, 4=10.9%, 8=72.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:22.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=89.9%, 8=7.7%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename0: (groupid=0, jobs=1): err= 0: pid=84073: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=224, BW=899KiB/s (920kB/s)(9024KiB/10041msec) 00:20:22.320 slat (usec): min=4, max=8026, avg=24.18, stdev=252.95 00:20:22.320 clat (msec): min=27, max=132, avg=71.08, stdev=18.38 00:20:22.320 lat (msec): min=27, max=132, avg=71.10, stdev=18.37 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:22.320 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:22.320 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:20:22.320 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:20:22.320 | 99.99th=[ 133] 00:20:22.320 bw ( KiB/s): min= 712, max= 1048, per=4.17%, avg=895.75, stdev=88.85, samples=20 00:20:22.320 iops : min= 178, max= 262, avg=223.90, stdev=22.19, samples=20 00:20:22.320 lat (msec) : 50=16.13%, 100=76.68%, 250=7.18% 00:20:22.320 cpu : usr=38.71%, sys=2.32%, ctx=1115, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename0: (groupid=0, jobs=1): err= 0: pid=84074: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=226, BW=905KiB/s (926kB/s)(9064KiB/10020msec) 00:20:22.320 slat (usec): min=3, max=8026, avg=29.22, stdev=303.99 00:20:22.320 clat (msec): min=35, max=132, avg=70.61, stdev=18.15 00:20:22.320 lat (msec): min=35, max=132, avg=70.64, stdev=18.14 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:20:22.320 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.320 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 
00:20:22.320 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:20:22.320 | 99.99th=[ 133] 00:20:22.320 bw ( KiB/s): min= 640, max= 1024, per=4.20%, avg=900.90, stdev=106.93, samples=20 00:20:22.320 iops : min= 160, max= 256, avg=225.20, stdev=26.74, samples=20 00:20:22.320 lat (msec) : 50=17.17%, 100=75.51%, 250=7.33% 00:20:22.320 cpu : usr=37.23%, sys=2.45%, ctx=1055, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename0: (groupid=0, jobs=1): err= 0: pid=84075: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=221, BW=886KiB/s (907kB/s)(8872KiB/10013msec) 00:20:22.320 slat (usec): min=4, max=8035, avg=29.78, stdev=342.83 00:20:22.320 clat (msec): min=35, max=126, avg=72.05, stdev=17.05 00:20:22.320 lat (msec): min=35, max=126, avg=72.08, stdev=17.04 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:22.320 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.320 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 105], 00:20:22.320 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 125], 00:20:22.320 | 99.99th=[ 127] 00:20:22.320 bw ( KiB/s): min= 752, max= 1024, per=4.12%, avg=883.65, stdev=81.79, samples=20 00:20:22.320 iops : min= 188, max= 256, avg=220.90, stdev=20.46, samples=20 00:20:22.320 lat (msec) : 50=16.82%, 100=76.92%, 250=6.27% 00:20:22.320 cpu : usr=32.77%, sys=1.78%, ctx=901, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=74.8%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=89.1%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename0: (groupid=0, jobs=1): err= 0: pid=84076: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=227, BW=912KiB/s (934kB/s)(9120KiB/10001msec) 00:20:22.320 slat (usec): min=4, max=8032, avg=38.78, stdev=401.95 00:20:22.320 clat (usec): min=1132, max=159596, avg=69989.66, stdev=22061.71 00:20:22.320 lat (usec): min=1141, max=159608, avg=70028.44, stdev=22052.76 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 3], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:20:22.320 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.320 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:20:22.320 | 99.00th=[ 122], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 161], 00:20:22.320 | 99.99th=[ 161] 00:20:22.320 bw ( KiB/s): min= 512, max= 1024, per=4.10%, avg=880.32, stdev=112.89, samples=19 00:20:22.320 iops : min= 128, max= 256, avg=220.05, stdev=28.21, samples=19 00:20:22.320 lat (msec) : 2=0.70%, 4=1.01%, 20=0.66%, 50=18.64%, 100=70.39% 00:20:22.320 lat (msec) : 250=8.60% 00:20:22.320 cpu : usr=34.89%, sys=1.92%, ctx=976, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=88.1%, 
8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename0: (groupid=0, jobs=1): err= 0: pid=84077: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=202, BW=812KiB/s (831kB/s)(8144KiB/10031msec) 00:20:22.320 slat (usec): min=3, max=8028, avg=26.19, stdev=274.58 00:20:22.320 clat (msec): min=38, max=144, avg=78.65, stdev=18.52 00:20:22.320 lat (msec): min=38, max=144, avg=78.68, stdev=18.52 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 66], 00:20:22.320 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 81], 00:20:22.320 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 112], 00:20:22.320 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:20:22.320 | 99.99th=[ 144] 00:20:22.320 bw ( KiB/s): min= 640, max= 976, per=3.76%, avg=807.90, stdev=107.78, samples=20 00:20:22.320 iops : min= 160, max= 244, avg=201.95, stdev=26.92, samples=20 00:20:22.320 lat (msec) : 50=7.37%, 100=80.70%, 250=11.94% 00:20:22.320 cpu : usr=38.11%, sys=2.18%, ctx=1363, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=4.1%, 4=16.4%, 8=65.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=91.9%, 8=4.4%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename1: (groupid=0, jobs=1): err= 0: pid=84078: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=234, BW=938KiB/s (960kB/s)(9436KiB/10065msec) 00:20:22.320 slat (usec): min=3, max=8026, avg=16.69, stdev=165.06 00:20:22.320 clat (usec): min=1540, max=148935, avg=68050.55, stdev=25934.11 00:20:22.320 lat (usec): min=1548, max=148950, avg=68067.24, stdev=25935.27 00:20:22.320 clat percentiles (usec): 00:20:22.320 | 1.00th=[ 1647], 5.00th=[ 3228], 10.00th=[ 38536], 20.00th=[ 52167], 00:20:22.320 | 30.00th=[ 61604], 40.00th=[ 69731], 50.00th=[ 71828], 60.00th=[ 73925], 00:20:22.320 | 70.00th=[ 79168], 80.00th=[ 83362], 90.00th=[ 95945], 95.00th=[107480], 00:20:22.320 | 99.00th=[124257], 99.50th=[126354], 99.90th=[139461], 99.95th=[143655], 00:20:22.320 | 99.99th=[149947] 00:20:22.320 bw ( KiB/s): min= 798, max= 2304, per=4.37%, avg=937.00, stdev=324.16, samples=20 00:20:22.320 iops : min= 199, max= 576, avg=234.20, stdev=81.06, samples=20 00:20:22.320 lat (msec) : 2=2.03%, 4=3.98%, 10=1.44%, 20=1.36%, 50=9.03% 00:20:22.320 lat (msec) : 100=74.74%, 250=7.42% 00:20:22.320 cpu : usr=37.03%, sys=2.18%, ctx=1195, majf=0, minf=0 00:20:22.320 IO depths : 1=0.4%, 2=1.3%, 4=3.9%, 8=78.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename1: (groupid=0, jobs=1): err= 0: pid=84079: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=222, BW=889KiB/s (911kB/s)(8916KiB/10025msec) 00:20:22.320 slat (usec): min=3, max=8024, avg=25.16, stdev=293.75 00:20:22.320 clat (msec): min=35, max=132, avg=71.81, stdev=18.34 00:20:22.320 lat (msec): min=35, max=132, avg=71.84, stdev=18.35 00:20:22.320 clat 
percentiles (msec): 00:20:22.320 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 54], 00:20:22.320 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.320 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:22.320 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:20:22.320 | 99.99th=[ 133] 00:20:22.320 bw ( KiB/s): min= 752, max= 1048, per=4.13%, avg=886.40, stdev=80.32, samples=20 00:20:22.320 iops : min= 188, max= 262, avg=221.60, stdev=20.08, samples=20 00:20:22.320 lat (msec) : 50=17.45%, 100=74.38%, 250=8.17% 00:20:22.320 cpu : usr=32.87%, sys=2.02%, ctx=902, majf=0, minf=9 00:20:22.320 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:22.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.320 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.320 filename1: (groupid=0, jobs=1): err= 0: pid=84080: Wed Dec 11 08:54:28 2024 00:20:22.320 read: IOPS=225, BW=902KiB/s (924kB/s)(9044KiB/10025msec) 00:20:22.320 slat (usec): min=4, max=10040, avg=32.61, stdev=397.20 00:20:22.320 clat (msec): min=26, max=135, avg=70.76, stdev=18.05 00:20:22.320 lat (msec): min=26, max=135, avg=70.79, stdev=18.06 00:20:22.320 clat percentiles (msec): 00:20:22.320 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:20:22.320 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.320 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 107], 00:20:22.320 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:22.320 | 99.99th=[ 136] 00:20:22.320 bw ( KiB/s): min= 640, max= 1024, per=4.18%, avg=897.90, stdev=91.39, samples=20 00:20:22.320 iops : min= 160, max= 256, avg=224.45, stdev=22.82, samples=20 00:20:22.321 lat (msec) : 50=16.32%, 100=76.74%, 250=6.94% 00:20:22.321 cpu : usr=35.87%, sys=2.07%, ctx=1066, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename1: (groupid=0, jobs=1): err= 0: pid=84081: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=218, BW=876KiB/s (897kB/s)(8764KiB/10005msec) 00:20:22.321 slat (usec): min=4, max=8025, avg=18.73, stdev=171.19 00:20:22.321 clat (msec): min=11, max=132, avg=72.96, stdev=18.06 00:20:22.321 lat (msec): min=11, max=132, avg=72.98, stdev=18.06 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:22.321 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:20:22.321 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:22.321 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 133], 00:20:22.321 | 99.99th=[ 133] 00:20:22.321 bw ( KiB/s): min= 619, max= 968, per=4.01%, avg=859.00, stdev=82.58, samples=19 00:20:22.321 iops : min= 154, max= 242, avg=214.68, stdev=20.75, samples=19 00:20:22.321 lat (msec) : 20=0.68%, 50=12.46%, 100=79.28%, 250=7.58% 00:20:22.321 cpu : usr=31.07%, sys=1.82%, ctx=839, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=2.1%, 
4=8.4%, 8=74.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename1: (groupid=0, jobs=1): err= 0: pid=84082: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=233, BW=935KiB/s (957kB/s)(9352KiB/10002msec) 00:20:22.321 slat (usec): min=4, max=4034, avg=22.84, stdev=166.00 00:20:22.321 clat (msec): min=11, max=123, avg=68.34, stdev=18.82 00:20:22.321 lat (msec): min=11, max=123, avg=68.36, stdev=18.82 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:20:22.321 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:20:22.321 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 106], 00:20:22.321 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 124], 00:20:22.321 | 99.99th=[ 124] 00:20:22.321 bw ( KiB/s): min= 637, max= 1080, per=4.28%, avg=918.05, stdev=92.36, samples=19 00:20:22.321 iops : min= 159, max= 270, avg=229.47, stdev=23.13, samples=19 00:20:22.321 lat (msec) : 20=0.68%, 50=19.93%, 100=71.73%, 250=7.66% 00:20:22.321 cpu : usr=39.91%, sys=2.30%, ctx=1202, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=87.3%, 8=12.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename1: (groupid=0, jobs=1): err= 0: pid=84083: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=225, BW=902KiB/s (924kB/s)(9064KiB/10044msec) 00:20:22.321 slat (usec): min=8, max=11024, avg=24.01, stdev=259.78 00:20:22.321 clat (msec): min=13, max=131, avg=70.67, stdev=18.44 00:20:22.321 lat (msec): min=13, max=131, avg=70.70, stdev=18.44 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:22.321 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:22.321 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:20:22.321 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 122], 99.95th=[ 128], 00:20:22.321 | 99.99th=[ 132] 00:20:22.321 bw ( KiB/s): min= 672, max= 1152, per=4.20%, avg=901.90, stdev=101.94, samples=20 00:20:22.321 iops : min= 168, max= 288, avg=225.40, stdev=25.50, samples=20 00:20:22.321 lat (msec) : 20=1.32%, 50=11.69%, 100=79.39%, 250=7.59% 00:20:22.321 cpu : usr=42.63%, sys=2.41%, ctx=1614, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename1: (groupid=0, jobs=1): err= 0: pid=84084: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=227, BW=908KiB/s (930kB/s)(9116KiB/10038msec) 00:20:22.321 slat (usec): min=4, max=8025, avg=29.67, stdev=335.37 00:20:22.321 clat (msec): min=27, max=131, avg=70.35, stdev=17.58 00:20:22.321 lat 
(msec): min=27, max=131, avg=70.38, stdev=17.59 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 51], 00:20:22.321 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:22.321 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:22.321 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 123], 00:20:22.321 | 99.99th=[ 132] 00:20:22.321 bw ( KiB/s): min= 760, max= 1072, per=4.22%, avg=904.80, stdev=71.55, samples=20 00:20:22.321 iops : min= 190, max= 268, avg=226.20, stdev=17.89, samples=20 00:20:22.321 lat (msec) : 50=19.48%, 100=74.55%, 250=5.97% 00:20:22.321 cpu : usr=30.93%, sys=1.89%, ctx=845, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename1: (groupid=0, jobs=1): err= 0: pid=84085: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=234, BW=937KiB/s (959kB/s)(9388KiB/10020msec) 00:20:22.321 slat (usec): min=4, max=8031, avg=30.35, stdev=319.97 00:20:22.321 clat (msec): min=25, max=132, avg=68.15, stdev=18.48 00:20:22.321 lat (msec): min=25, max=132, avg=68.18, stdev=18.48 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:20:22.321 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:22.321 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 104], 00:20:22.321 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 133], 00:20:22.321 | 99.99th=[ 133] 00:20:22.321 bw ( KiB/s): min= 784, max= 1120, per=4.36%, avg=934.10, stdev=77.73, samples=20 00:20:22.321 iops : min= 196, max= 280, avg=233.50, stdev=19.47, samples=20 00:20:22.321 lat (msec) : 50=23.65%, 100=70.00%, 250=6.35% 00:20:22.321 cpu : usr=34.32%, sys=2.07%, ctx=976, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename2: (groupid=0, jobs=1): err= 0: pid=84086: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=214, BW=857KiB/s (878kB/s)(8584KiB/10015msec) 00:20:22.321 slat (usec): min=4, max=8027, avg=37.91, stdev=423.01 00:20:22.321 clat (msec): min=19, max=147, avg=74.47, stdev=18.41 00:20:22.321 lat (msec): min=19, max=147, avg=74.51, stdev=18.43 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:22.321 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:22.321 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 108], 00:20:22.321 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 148], 00:20:22.321 | 99.99th=[ 148] 00:20:22.321 bw ( KiB/s): min= 640, max= 1024, per=3.99%, avg=855.90, stdev=83.52, samples=20 00:20:22.321 iops : min= 160, max= 256, avg=213.95, stdev=20.88, samples=20 00:20:22.321 lat (msec) : 20=0.28%, 50=13.37%, 100=77.49%, 250=8.85% 00:20:22.321 cpu : usr=30.95%, sys=1.87%, 
ctx=838, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=76.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename2: (groupid=0, jobs=1): err= 0: pid=84087: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=215, BW=861KiB/s (881kB/s)(8616KiB/10009msec) 00:20:22.321 slat (usec): min=4, max=8031, avg=27.33, stdev=298.90 00:20:22.321 clat (msec): min=11, max=145, avg=74.24, stdev=18.10 00:20:22.321 lat (msec): min=11, max=145, avg=74.27, stdev=18.11 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 58], 00:20:22.321 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 78], 00:20:22.321 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 107], 00:20:22.321 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 146], 00:20:22.321 | 99.99th=[ 146] 00:20:22.321 bw ( KiB/s): min= 640, max= 944, per=3.94%, avg=845.05, stdev=92.69, samples=19 00:20:22.321 iops : min= 160, max= 236, avg=211.26, stdev=23.17, samples=19 00:20:22.321 lat (msec) : 20=0.60%, 50=10.21%, 100=81.62%, 250=7.57% 00:20:22.321 cpu : usr=36.68%, sys=2.24%, ctx=1056, majf=0, minf=9 00:20:22.321 IO depths : 1=0.1%, 2=2.4%, 4=9.2%, 8=73.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:22.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 complete : 0=0.0%, 4=89.7%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.321 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.321 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.321 filename2: (groupid=0, jobs=1): err= 0: pid=84088: Wed Dec 11 08:54:28 2024 00:20:22.321 read: IOPS=231, BW=927KiB/s (950kB/s)(9324KiB/10055msec) 00:20:22.321 slat (usec): min=3, max=8021, avg=17.15, stdev=165.92 00:20:22.321 clat (msec): min=2, max=143, avg=68.82, stdev=22.98 00:20:22.321 lat (msec): min=2, max=143, avg=68.84, stdev=22.98 00:20:22.321 clat percentiles (msec): 00:20:22.321 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 48], 20.00th=[ 57], 00:20:22.321 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.321 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:20:22.321 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 123], 00:20:22.321 | 99.99th=[ 144] 00:20:22.321 bw ( KiB/s): min= 768, max= 2064, per=4.31%, avg=925.90, stdev=273.42, samples=20 00:20:22.322 iops : min= 192, max= 516, avg=231.45, stdev=68.35, samples=20 00:20:22.322 lat (msec) : 4=2.75%, 10=2.66%, 20=0.77%, 50=10.98%, 100=76.83% 00:20:22.322 lat (msec) : 250=6.01% 00:20:22.322 cpu : usr=31.60%, sys=1.74%, ctx=901, majf=0, minf=0 00:20:22.322 IO depths : 1=0.3%, 2=0.8%, 4=2.0%, 8=80.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 filename2: (groupid=0, jobs=1): err= 0: pid=84089: Wed Dec 11 08:54:28 2024 00:20:22.322 read: IOPS=228, BW=912KiB/s (934kB/s)(9160KiB/10043msec) 00:20:22.322 slat (usec): min=3, max=8023, avg=29.01, 
stdev=269.20 00:20:22.322 clat (msec): min=14, max=143, avg=69.93, stdev=18.45 00:20:22.322 lat (msec): min=14, max=143, avg=69.96, stdev=18.44 00:20:22.322 clat percentiles (msec): 00:20:22.322 | 1.00th=[ 16], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:22.322 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:22.322 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 105], 00:20:22.322 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 136], 00:20:22.322 | 99.99th=[ 144] 00:20:22.322 bw ( KiB/s): min= 808, max= 1248, per=4.25%, avg=911.50, stdev=99.94, samples=20 00:20:22.322 iops : min= 202, max= 312, avg=227.80, stdev=24.98, samples=20 00:20:22.322 lat (msec) : 20=1.31%, 50=14.54%, 100=77.60%, 250=6.55% 00:20:22.322 cpu : usr=37.50%, sys=2.33%, ctx=1175, majf=0, minf=9 00:20:22.322 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 filename2: (groupid=0, jobs=1): err= 0: pid=84090: Wed Dec 11 08:54:28 2024 00:20:22.322 read: IOPS=219, BW=878KiB/s (899kB/s)(8788KiB/10009msec) 00:20:22.322 slat (usec): min=3, max=8026, avg=29.49, stdev=296.07 00:20:22.322 clat (msec): min=14, max=131, avg=72.70, stdev=18.56 00:20:22.322 lat (msec): min=14, max=131, avg=72.73, stdev=18.57 00:20:22.322 clat percentiles (msec): 00:20:22.322 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 54], 00:20:22.322 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 77], 00:20:22.322 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 107], 00:20:22.322 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 132], 00:20:22.322 | 99.99th=[ 132] 00:20:22.322 bw ( KiB/s): min= 640, max= 976, per=4.02%, avg=861.47, stdev=92.80, samples=19 00:20:22.322 iops : min= 160, max= 244, avg=215.37, stdev=23.20, samples=19 00:20:22.322 lat (msec) : 20=0.46%, 50=12.93%, 100=76.51%, 250=10.10% 00:20:22.322 cpu : usr=42.07%, sys=2.67%, ctx=1606, majf=0, minf=10 00:20:22.322 IO depths : 1=0.1%, 2=2.1%, 4=8.1%, 8=75.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=89.0%, 8=9.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 filename2: (groupid=0, jobs=1): err= 0: pid=84091: Wed Dec 11 08:54:28 2024 00:20:22.322 read: IOPS=216, BW=868KiB/s (889kB/s)(8712KiB/10039msec) 00:20:22.322 slat (usec): min=8, max=8033, avg=21.62, stdev=242.86 00:20:22.322 clat (msec): min=15, max=122, avg=73.53, stdev=17.92 00:20:22.322 lat (msec): min=15, max=122, avg=73.55, stdev=17.91 00:20:22.322 clat percentiles (msec): 00:20:22.322 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 60], 00:20:22.322 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:20:22.322 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 105], 00:20:22.322 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:20:22.322 | 99.99th=[ 124] 00:20:22.322 bw ( KiB/s): min= 768, max= 1136, per=4.04%, avg=866.70, stdev=90.07, samples=20 00:20:22.322 iops : min= 192, max= 284, avg=216.60, stdev=22.55, 
samples=20 00:20:22.322 lat (msec) : 20=1.38%, 50=10.24%, 100=81.96%, 250=6.43% 00:20:22.322 cpu : usr=37.21%, sys=2.04%, ctx=1070, majf=0, minf=9 00:20:22.322 IO depths : 1=0.1%, 2=1.7%, 4=6.7%, 8=75.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 filename2: (groupid=0, jobs=1): err= 0: pid=84092: Wed Dec 11 08:54:28 2024 00:20:22.322 read: IOPS=226, BW=905KiB/s (926kB/s)(9068KiB/10025msec) 00:20:22.322 slat (usec): min=8, max=8027, avg=26.76, stdev=303.25 00:20:22.322 clat (msec): min=25, max=142, avg=70.60, stdev=18.92 00:20:22.322 lat (msec): min=25, max=142, avg=70.62, stdev=18.92 00:20:22.322 clat percentiles (msec): 00:20:22.322 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:20:22.322 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:20:22.322 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:22.322 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 144], 00:20:22.322 | 99.99th=[ 144] 00:20:22.322 bw ( KiB/s): min= 640, max= 1024, per=4.20%, avg=900.30, stdev=110.71, samples=20 00:20:22.322 iops : min= 160, max= 256, avg=225.05, stdev=27.65, samples=20 00:20:22.322 lat (msec) : 50=21.35%, 100=70.14%, 250=8.51% 00:20:22.322 cpu : usr=31.03%, sys=1.96%, ctx=857, majf=0, minf=9 00:20:22.322 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 filename2: (groupid=0, jobs=1): err= 0: pid=84093: Wed Dec 11 08:54:28 2024 00:20:22.322 read: IOPS=233, BW=932KiB/s (954kB/s)(9336KiB/10017msec) 00:20:22.322 slat (usec): min=4, max=5027, avg=24.86, stdev=195.67 00:20:22.322 clat (msec): min=26, max=123, avg=68.51, stdev=17.43 00:20:22.322 lat (msec): min=26, max=123, avg=68.53, stdev=17.42 00:20:22.322 clat percentiles (msec): 00:20:22.322 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:22.322 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 73], 00:20:22.322 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 102], 00:20:22.322 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 124], 00:20:22.322 | 99.99th=[ 124] 00:20:22.322 bw ( KiB/s): min= 768, max= 1048, per=4.33%, avg=929.20, stdev=69.80, samples=20 00:20:22.322 iops : min= 192, max= 262, avg=232.30, stdev=17.45, samples=20 00:20:22.322 lat (msec) : 50=16.67%, 100=78.11%, 250=5.23% 00:20:22.322 cpu : usr=43.45%, sys=2.28%, ctx=1377, majf=0, minf=9 00:20:22.322 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:22.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 complete : 0=0.0%, 4=87.3%, 8=12.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.322 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:22.322 00:20:22.322 Run status group 0 (all jobs): 00:20:22.322 READ: bw=20.9MiB/s (22.0MB/s), 812KiB/s-938KiB/s (831kB/s-960kB/s), io=211MiB 
(221MB), run=10001-10065msec 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.322 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 bdev_null0 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 [2024-12-11 08:54:28.866241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 bdev_null1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.323 { 00:20:22.323 "params": { 00:20:22.323 "name": "Nvme$subsystem", 00:20:22.323 "trtype": "$TEST_TRANSPORT", 00:20:22.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.323 "adrfam": "ipv4", 00:20:22.323 "trsvcid": "$NVMF_PORT", 00:20:22.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.323 "hdgst": ${hdgst:-false}, 00:20:22.323 "ddgst": ${ddgst:-false} 00:20:22.323 }, 00:20:22.323 "method": "bdev_nvme_attach_controller" 00:20:22.323 } 00:20:22.323 EOF 00:20:22.323 )") 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.323 { 00:20:22.323 "params": { 00:20:22.323 "name": "Nvme$subsystem", 00:20:22.323 "trtype": "$TEST_TRANSPORT", 00:20:22.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.323 "adrfam": "ipv4", 00:20:22.323 "trsvcid": "$NVMF_PORT", 00:20:22.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.323 "hdgst": ${hdgst:-false}, 00:20:22.323 "ddgst": ${ddgst:-false} 00:20:22.323 }, 00:20:22.323 "method": "bdev_nvme_attach_controller" 00:20:22.323 } 00:20:22.323 EOF 00:20:22.323 )") 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
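The trace above is the harness assembling two inputs for fio: gen_nvmf_target_json writes a bdev_nvme_attach_controller configuration to /dev/fd/62, gen_fio_conf writes the fio job sections to /dev/fd/61, and fio is launched with the spdk_bdev ioengine preloaded from build/fio/spdk_bdev. A minimal standalone sketch of the same invocation follows; the on-disk file names (bdev.json, dif.fio), the subsystems/bdev/config JSON wrapper, the thread=1/time_based options, and the bdev name Nvme0n1 are illustrative assumptions rather than values copied from this run.

# Sketch only: drive the same randread workload against an already-running
# NVMe-oF/TCP target from outside the test harness. SPDK_DIR is assumed to be
# an SPDK checkout built with the fio bdev plugin.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

cat > dif.fio <<'EOF'
[global]
thread=1                ; required by the SPDK fio plugin
rw=randread
bs=8k,16k,128k          ; read/write/trim sizes, matching the (R)/(W)/(T) line in the trace
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1        ; bdev exposed by the attach_controller entry above (assumed name)
EOF

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio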
00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:22.323 "params": { 00:20:22.323 "name": "Nvme0", 00:20:22.323 "trtype": "tcp", 00:20:22.323 "traddr": "10.0.0.3", 00:20:22.323 "adrfam": "ipv4", 00:20:22.323 "trsvcid": "4420", 00:20:22.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:22.323 "hdgst": false, 00:20:22.323 "ddgst": false 00:20:22.323 }, 00:20:22.323 "method": "bdev_nvme_attach_controller" 00:20:22.323 },{ 00:20:22.323 "params": { 00:20:22.323 "name": "Nvme1", 00:20:22.323 "trtype": "tcp", 00:20:22.323 "traddr": "10.0.0.3", 00:20:22.323 "adrfam": "ipv4", 00:20:22.323 "trsvcid": "4420", 00:20:22.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.323 "hdgst": false, 00:20:22.323 "ddgst": false 00:20:22.323 }, 00:20:22.323 "method": "bdev_nvme_attach_controller" 00:20:22.323 }' 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:22.323 08:54:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.323 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:22.323 ... 00:20:22.323 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:22.323 ... 
00:20:22.323 fio-3.35 00:20:22.323 Starting 4 threads 00:20:27.608 00:20:27.608 filename0: (groupid=0, jobs=1): err= 0: pid=84235: Wed Dec 11 08:54:34 2024 00:20:27.608 read: IOPS=2159, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5002msec) 00:20:27.608 slat (usec): min=6, max=290, avg=11.70, stdev= 5.96 00:20:27.608 clat (usec): min=627, max=7306, avg=3669.82, stdev=1118.18 00:20:27.608 lat (usec): min=635, max=7319, avg=3681.52, stdev=1118.92 00:20:27.608 clat percentiles (usec): 00:20:27.608 | 1.00th=[ 1336], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 2999], 00:20:27.608 | 30.00th=[ 3326], 40.00th=[ 3589], 50.00th=[ 3982], 60.00th=[ 4146], 00:20:27.608 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5211], 00:20:27.608 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6915], 99.95th=[ 7046], 00:20:27.608 | 99.99th=[ 7111] 00:20:27.608 bw ( KiB/s): min=14608, max=20384, per=27.69%, avg=17824.00, stdev=2374.59, samples=9 00:20:27.608 iops : min= 1826, max= 2548, avg=2228.00, stdev=296.82, samples=9 00:20:27.608 lat (usec) : 750=0.11%, 1000=0.13% 00:20:27.608 lat (msec) : 2=13.71%, 4=36.92%, 10=49.13% 00:20:27.608 cpu : usr=91.06%, sys=7.70%, ctx=73, majf=0, minf=9 00:20:27.608 IO depths : 1=0.1%, 2=6.3%, 4=62.0%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 issued rwts: total=10802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.608 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:27.608 filename0: (groupid=0, jobs=1): err= 0: pid=84236: Wed Dec 11 08:54:34 2024 00:20:27.608 read: IOPS=2011, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5001msec) 00:20:27.608 slat (nsec): min=7220, max=53090, avg=14863.05, stdev=5432.78 00:20:27.608 clat (usec): min=843, max=7388, avg=3930.03, stdev=974.50 00:20:27.608 lat (usec): min=851, max=7404, avg=3944.89, stdev=975.20 00:20:27.608 clat percentiles (usec): 00:20:27.608 | 1.00th=[ 1385], 5.00th=[ 1500], 10.00th=[ 2868], 20.00th=[ 3294], 00:20:27.608 | 30.00th=[ 3556], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4228], 00:20:27.608 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5211], 00:20:27.608 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 7308], 00:20:27.608 | 99.99th=[ 7373] 00:20:27.608 bw ( KiB/s): min=12672, max=19776, per=24.83%, avg=15985.78, stdev=2321.27, samples=9 00:20:27.608 iops : min= 1584, max= 2472, avg=1998.22, stdev=290.16, samples=9 00:20:27.608 lat (usec) : 1000=0.02% 00:20:27.608 lat (msec) : 2=6.81%, 4=31.20%, 10=61.97% 00:20:27.608 cpu : usr=91.86%, sys=7.28%, ctx=13, majf=0, minf=9 00:20:27.608 IO depths : 1=0.1%, 2=10.9%, 4=59.3%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 issued rwts: total=10061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.608 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:27.608 filename1: (groupid=0, jobs=1): err= 0: pid=84237: Wed Dec 11 08:54:34 2024 00:20:27.608 read: IOPS=1932, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5001msec) 00:20:27.608 slat (nsec): min=7318, max=57453, avg=16102.08, stdev=5399.84 00:20:27.608 clat (usec): min=927, max=7900, avg=4086.50, stdev=708.81 00:20:27.608 lat (usec): min=934, max=7915, avg=4102.60, stdev=709.28 00:20:27.608 clat percentiles (usec): 00:20:27.608 | 1.00th=[ 1778], 5.00th=[ 
2769], 10.00th=[ 3261], 20.00th=[ 3458], 00:20:27.608 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:20:27.608 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5145], 00:20:27.608 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[ 6390], 00:20:27.608 | 99.99th=[ 7898] 00:20:27.608 bw ( KiB/s): min=14624, max=16512, per=23.72%, avg=15272.78, stdev=763.51, samples=9 00:20:27.608 iops : min= 1828, max= 2064, avg=1909.00, stdev=95.35, samples=9 00:20:27.608 lat (usec) : 1000=0.04% 00:20:27.608 lat (msec) : 2=1.24%, 4=28.00%, 10=70.72% 00:20:27.608 cpu : usr=91.88%, sys=7.30%, ctx=60, majf=0, minf=9 00:20:27.608 IO depths : 1=0.1%, 2=14.6%, 4=57.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 issued rwts: total=9665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.608 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:27.608 filename1: (groupid=0, jobs=1): err= 0: pid=84238: Wed Dec 11 08:54:34 2024 00:20:27.608 read: IOPS=1944, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5001msec) 00:20:27.608 slat (nsec): min=3483, max=57416, avg=16391.17, stdev=5356.08 00:20:27.608 clat (usec): min=931, max=7883, avg=4059.72, stdev=741.43 00:20:27.608 lat (usec): min=939, max=7902, avg=4076.11, stdev=741.16 00:20:27.608 clat percentiles (usec): 00:20:27.608 | 1.00th=[ 1663], 5.00th=[ 2671], 10.00th=[ 3228], 20.00th=[ 3425], 00:20:27.608 | 30.00th=[ 3982], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:20:27.608 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5145], 00:20:27.608 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 7767], 99.95th=[ 7832], 00:20:27.608 | 99.99th=[ 7898] 00:20:27.608 bw ( KiB/s): min=14624, max=16976, per=23.91%, avg=15390.22, stdev=936.94, samples=9 00:20:27.608 iops : min= 1828, max= 2122, avg=1923.78, stdev=117.12, samples=9 00:20:27.608 lat (usec) : 1000=0.06% 00:20:27.608 lat (msec) : 2=1.71%, 4=28.63%, 10=69.60% 00:20:27.608 cpu : usr=91.06%, sys=7.82%, ctx=13, majf=0, minf=10 00:20:27.608 IO depths : 1=0.1%, 2=14.0%, 4=58.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.608 issued rwts: total=9725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.608 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:27.608 00:20:27.608 Run status group 0 (all jobs): 00:20:27.608 READ: bw=62.9MiB/s (65.9MB/s), 15.1MiB/s-16.9MiB/s (15.8MB/s-17.7MB/s), io=314MiB (330MB), run=5001-5002msec 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 ************************************ 00:20:27.608 END TEST fio_dif_rand_params 00:20:27.608 ************************************ 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 00:20:27.608 real 0m23.288s 00:20:27.608 user 2m2.898s 00:20:27.608 sys 0m8.720s 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 08:54:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:27.608 08:54:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:27.608 08:54:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 ************************************ 00:20:27.608 START TEST fio_dif_digest 00:20:27.608 ************************************ 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:27.608 08:54:34 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 bdev_null0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 [2024-12-11 08:54:34.960104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.609 { 00:20:27.609 "params": { 00:20:27.609 "name": "Nvme$subsystem", 00:20:27.609 "trtype": "$TEST_TRANSPORT", 00:20:27.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.609 "adrfam": "ipv4", 00:20:27.609 "trsvcid": "$NVMF_PORT", 00:20:27.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.609 "hdgst": ${hdgst:-false}, 00:20:27.609 "ddgst": ${ddgst:-false} 00:20:27.609 }, 00:20:27.609 "method": "bdev_nvme_attach_controller" 00:20:27.609 } 00:20:27.609 EOF 00:20:27.609 )") 00:20:27.609 
08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
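For the digest run, the target side was set up the same way as the earlier tests but with a DIF type 3 null bdev, and the initiator config printed just below enables hdgst and ddgst, so every NVMe/TCP PDU carries a CRC32C header and data digest for the target to verify. A hedged rpc.py equivalent of the target-side rpc_cmd calls traced above (the rpc.py path and a pre-existing TCP transport are assumptions):

# Sketch: the target-side RPCs shown in the trace, issued directly with rpc.py.
# Assumes the nvmf target is running and the TCP transport already exists
# (otherwise create it first: rpc.py nvmf_create_transport -t tcp).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Export it over NVMe/TCP on 10.0.0.3:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420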
00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:27.609 "params": { 00:20:27.609 "name": "Nvme0", 00:20:27.609 "trtype": "tcp", 00:20:27.609 "traddr": "10.0.0.3", 00:20:27.609 "adrfam": "ipv4", 00:20:27.609 "trsvcid": "4420", 00:20:27.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:27.609 "hdgst": true, 00:20:27.609 "ddgst": true 00:20:27.609 }, 00:20:27.609 "method": "bdev_nvme_attach_controller" 00:20:27.609 }' 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:27.609 08:54:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:27.609 08:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:27.609 08:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:27.609 08:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:27.609 08:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.609 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:27.609 ... 
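The launch that follows is plain fio with SPDK's bdev fio plugin preloaded, fed the JSON printed above on one file descriptor and a generated job file on another. A rough standalone equivalent with the config written to a temp file instead of /dev/fd; the digest flags, addresses, and the 128KiB/iodepth=3 job shape come from the trace, while the bdev name Nvme0n1 and the 10-second runtime are assumptions:

cat > /tmp/hdgst_bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": true, "ddgst": true } } ] } ] }
EOF
# spdk_bdev is an external ioengine, so the plugin library is LD_PRELOADed into fio
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
  --spdk_json_conf=/tmp/hdgst_bdev.json --filename=Nvme0n1 \
  --rw=randread --bs=128k --iodepth=3 --numjobs=3 --time_based=1 --runtime=10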
00:20:27.609 fio-3.35 00:20:27.609 Starting 3 threads 00:20:39.814 00:20:39.814 filename0: (groupid=0, jobs=1): err= 0: pid=84344: Wed Dec 11 08:54:45 2024 00:20:39.814 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10008msec) 00:20:39.814 slat (nsec): min=7466, max=68050, avg=17381.56, stdev=5986.28 00:20:39.814 clat (usec): min=11909, max=18248, avg=13879.73, stdev=706.64 00:20:39.814 lat (usec): min=11923, max=18268, avg=13897.11, stdev=707.03 00:20:39.814 clat percentiles (usec): 00:20:39.815 | 1.00th=[12518], 5.00th=[12911], 10.00th=[12911], 20.00th=[13173], 00:20:39.815 | 30.00th=[13435], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:20:39.815 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:20:39.815 | 99.00th=[15270], 99.50th=[15533], 99.90th=[18220], 99.95th=[18220], 00:20:39.815 | 99.99th=[18220] 00:20:39.815 bw ( KiB/s): min=26880, max=29184, per=33.43%, avg=27648.00, stdev=809.54, samples=19 00:20:39.815 iops : min= 210, max= 228, avg=216.00, stdev= 6.32, samples=19 00:20:39.815 lat (msec) : 20=100.00% 00:20:39.815 cpu : usr=91.24%, sys=8.20%, ctx=75, majf=0, minf=0 00:20:39.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:39.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:39.815 filename0: (groupid=0, jobs=1): err= 0: pid=84345: Wed Dec 11 08:54:45 2024 00:20:39.815 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10014msec) 00:20:39.815 slat (nsec): min=4991, max=62695, avg=16484.96, stdev=6133.03 00:20:39.815 clat (usec): min=11950, max=20159, avg=13889.51, stdev=743.68 00:20:39.815 lat (usec): min=11965, max=20184, avg=13906.00, stdev=744.05 00:20:39.815 clat percentiles (usec): 00:20:39.815 | 1.00th=[12518], 5.00th=[12911], 10.00th=[12911], 20.00th=[13173], 00:20:39.815 | 30.00th=[13435], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:20:39.815 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:20:39.815 | 99.00th=[15401], 99.50th=[15533], 99.90th=[20055], 99.95th=[20055], 00:20:39.815 | 99.99th=[20055] 00:20:39.815 bw ( KiB/s): min=26112, max=29184, per=33.33%, avg=27568.35, stdev=926.34, samples=20 00:20:39.815 iops : min= 204, max= 228, avg=215.35, stdev= 7.21, samples=20 00:20:39.815 lat (msec) : 20=99.86%, 50=0.14% 00:20:39.815 cpu : usr=91.13%, sys=8.20%, ctx=64, majf=0, minf=0 00:20:39.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:39.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:39.815 filename0: (groupid=0, jobs=1): err= 0: pid=84346: Wed Dec 11 08:54:45 2024 00:20:39.815 read: IOPS=215, BW=26.9MiB/s (28.3MB/s)(270MiB/10007msec) 00:20:39.815 slat (nsec): min=7405, max=63668, avg=17362.61, stdev=5842.20 00:20:39.815 clat (usec): min=11918, max=18240, avg=13878.89, stdev=706.02 00:20:39.815 lat (usec): min=11942, max=18258, avg=13896.25, stdev=706.35 00:20:39.815 clat percentiles (usec): 00:20:39.815 | 1.00th=[12518], 5.00th=[12780], 10.00th=[12911], 20.00th=[13042], 00:20:39.815 | 30.00th=[13435], 40.00th=[13829], 
50.00th=[13960], 60.00th=[14091], 00:20:39.815 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:20:39.815 | 99.00th=[15270], 99.50th=[15533], 99.90th=[18220], 99.95th=[18220], 00:20:39.815 | 99.99th=[18220] 00:20:39.815 bw ( KiB/s): min=26880, max=29184, per=33.43%, avg=27650.95, stdev=812.59, samples=19 00:20:39.815 iops : min= 210, max= 228, avg=216.00, stdev= 6.32, samples=19 00:20:39.815 lat (msec) : 20=100.00% 00:20:39.815 cpu : usr=92.03%, sys=7.42%, ctx=46, majf=0, minf=0 00:20:39.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:39.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.815 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:39.815 00:20:39.815 Run status group 0 (all jobs): 00:20:39.815 READ: bw=80.8MiB/s (84.7MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.3MB/s), io=809MiB (848MB), run=10007-10014msec 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 ************************************ 00:20:39.815 END TEST fio_dif_digest 00:20:39.815 ************************************ 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.815 00:20:39.815 real 0m10.949s 00:20:39.815 user 0m28.086s 00:20:39.815 sys 0m2.614s 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.815 08:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 08:54:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:39.815 08:54:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.815 rmmod nvme_tcp 00:20:39.815 rmmod nvme_fabrics 00:20:39.815 rmmod nvme_keyring 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.815 08:54:45 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83601 ']' 00:20:39.815 08:54:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83601 00:20:39.815 08:54:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83601 ']' 00:20:39.815 08:54:45 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83601 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83601 00:20:39.815 killing process with pid 83601 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83601' 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83601 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83601 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:39.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.815 Waiting for block devices as requested 00:20:39.815 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.815 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.815 08:54:46 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:20:39.815 08:54:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:39.815 08:54:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.815 08:54:47 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:39.815 00:20:39.815 real 0m58.909s 00:20:39.815 user 3m46.240s 00:20:39.815 sys 0m19.782s 00:20:39.815 08:54:47 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.815 08:54:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 ************************************ 00:20:39.815 END TEST nvmf_dif 00:20:39.815 ************************************ 00:20:39.815 08:54:47 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:39.815 08:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.815 08:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.815 08:54:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 ************************************ 00:20:39.815 START TEST nvmf_abort_qd_sizes 00:20:39.815 ************************************ 00:20:39.815 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:39.815 * Looking for test storage... 00:20:39.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.816 --rc genhtml_branch_coverage=1 00:20:39.816 --rc genhtml_function_coverage=1 00:20:39.816 --rc genhtml_legend=1 00:20:39.816 --rc geninfo_all_blocks=1 00:20:39.816 --rc geninfo_unexecuted_blocks=1 00:20:39.816 00:20:39.816 ' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.816 --rc genhtml_branch_coverage=1 00:20:39.816 --rc genhtml_function_coverage=1 00:20:39.816 --rc genhtml_legend=1 00:20:39.816 --rc geninfo_all_blocks=1 00:20:39.816 --rc geninfo_unexecuted_blocks=1 00:20:39.816 00:20:39.816 ' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.816 --rc genhtml_branch_coverage=1 00:20:39.816 --rc genhtml_function_coverage=1 00:20:39.816 --rc genhtml_legend=1 00:20:39.816 --rc geninfo_all_blocks=1 00:20:39.816 --rc geninfo_unexecuted_blocks=1 00:20:39.816 00:20:39.816 ' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.816 --rc genhtml_branch_coverage=1 00:20:39.816 --rc genhtml_function_coverage=1 00:20:39.816 --rc genhtml_legend=1 00:20:39.816 --rc geninfo_all_blocks=1 00:20:39.816 --rc geninfo_unexecuted_blocks=1 00:20:39.816 00:20:39.816 ' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.816 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.816 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:39.817 Cannot find device "nvmf_init_br" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:39.817 Cannot find device "nvmf_init_br2" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:39.817 Cannot find device "nvmf_tgt_br" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.817 Cannot find device "nvmf_tgt_br2" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:39.817 Cannot find device "nvmf_init_br" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:39.817 Cannot find device "nvmf_init_br2" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:39.817 Cannot find device "nvmf_tgt_br" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:39.817 Cannot find device "nvmf_tgt_br2" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:39.817 Cannot find device "nvmf_br" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:39.817 Cannot find device "nvmf_init_if" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.817 Cannot find device "nvmf_init_if2" 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.817 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:40.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:20:40.076 00:20:40.076 --- 10.0.0.3 ping statistics --- 00:20:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.076 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:40.076 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:40.076 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:40.076 00:20:40.076 --- 10.0.0.4 ping statistics --- 00:20:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.076 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:40.076 00:20:40.076 --- 10.0.0.1 ping statistics --- 00:20:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.076 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:40.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:40.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:20:40.076 00:20:40.076 --- 10.0.0.2 ping statistics --- 00:20:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.076 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:40.076 08:54:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:40.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.644 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:40.903 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=85004 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 85004 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 85004 ']' 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.903 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:40.903 [2024-12-11 08:54:48.614384] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:20:40.903 [2024-12-11 08:54:48.614482] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.162 [2024-12-11 08:54:48.769976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.162 [2024-12-11 08:54:48.810888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.162 [2024-12-11 08:54:48.810960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.162 [2024-12-11 08:54:48.810982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.162 [2024-12-11 08:54:48.810992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.162 [2024-12-11 08:54:48.811001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.162 [2024-12-11 08:54:48.811966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.162 [2024-12-11 08:54:48.812022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.162 [2024-12-11 08:54:48.812210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.162 [2024-12-11 08:54:48.812213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.162 [2024-12-11 08:54:48.845428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.162 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.162 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:20:41.162 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.162 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.162 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:41.421 08:54:48 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:41.421 08:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
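The nvme_in_userspace trace above boils down to a class-code scan with lspci (class 01 mass storage, subclass 08 NVM, prog-if 02 NVMe), followed by per-device checks against the PCI allow/block lists and the sysfs driver node. The scan itself, lifted from the trace into a standalone pipeline:

# print the BDFs of all NVMe controllers (class/subclass/prog-if 01/08/02)
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# per-BDF filter used by the harness, e.g. for the first controller found here
[[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] && echo "0000:00:10.0 visible via kernel nvme driver"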
00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.422 08:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 ************************************ 00:20:41.422 START TEST spdk_target_abort 00:20:41.422 ************************************ 00:20:41.422 08:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:20:41.422 08:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:41.422 08:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:41.422 08:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.422 08:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 spdk_targetn1 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 [2024-12-11 08:54:49.063797] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 [2024-12-11 08:54:49.105628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:41.422 08:54:49 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:41.422 08:54:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:44.709 Initializing NVMe Controllers 00:20:44.709 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:44.709 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:44.709 Initialization complete. Launching workers. 
00:20:44.709 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10375, failed: 0 00:20:44.709 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1057, failed to submit 9318 00:20:44.709 success 813, unsuccessful 244, failed 0 00:20:44.709 08:54:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:44.709 08:54:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.902 Initializing NVMe Controllers 00:20:48.902 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.902 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.902 Initialization complete. Launching workers. 00:20:48.902 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:20:48.902 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1187, failed to submit 7693 00:20:48.902 success 385, unsuccessful 802, failed 0 00:20:48.902 08:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.902 08:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:51.436 Initializing NVMe Controllers 00:20:51.436 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:51.436 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:51.436 Initialization complete. Launching workers. 
00:20:51.436 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31133, failed: 0 00:20:51.436 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2378, failed to submit 28755 00:20:51.436 success 481, unsuccessful 1897, failed 0 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.436 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:51.695 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.695 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 85004 00:20:51.695 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 85004 ']' 00:20:51.695 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 85004 00:20:51.695 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85004 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.954 killing process with pid 85004 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85004' 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 85004 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 85004 00:20:51.954 00:20:51.954 real 0m10.659s 00:20:51.954 user 0m40.831s 00:20:51.954 sys 0m2.090s 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:51.954 ************************************ 00:20:51.954 END TEST spdk_target_abort 00:20:51.954 ************************************ 00:20:51.954 08:54:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:51.954 08:54:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.954 08:54:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.954 08:54:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:51.954 ************************************ 00:20:51.954 START TEST kernel_target_abort 00:20:51.954 
************************************ 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:51.954 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:52.213 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:52.213 08:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:52.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.472 Waiting for block devices as requested 00:20:52.472 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:52.472 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:52.731 No valid GPT data, bailing 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:52.731 No valid GPT data, bailing 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:52.731 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:52.991 No valid GPT data, bailing 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:52.991 No valid GPT data, bailing 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce --hostid=19057b12-55d1-482d-ac95-8c26bd7da4ce -a 10.0.0.1 -t tcp -s 4420 00:20:52.991 00:20:52.991 Discovery Log Number of Records 2, Generation counter 2 00:20:52.991 =====Discovery Log Entry 0====== 00:20:52.991 trtype: tcp 00:20:52.991 adrfam: ipv4 00:20:52.991 subtype: current discovery subsystem 00:20:52.991 treq: not specified, sq flow control disable supported 00:20:52.991 portid: 1 00:20:52.991 trsvcid: 4420 00:20:52.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:52.991 traddr: 10.0.0.1 00:20:52.991 eflags: none 00:20:52.991 sectype: none 00:20:52.991 =====Discovery Log Entry 1====== 00:20:52.991 trtype: tcp 00:20:52.991 adrfam: ipv4 00:20:52.991 subtype: nvme subsystem 00:20:52.991 treq: not specified, sq flow control disable supported 00:20:52.991 portid: 1 00:20:52.991 trsvcid: 4420 00:20:52.991 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:52.991 traddr: 10.0.0.1 00:20:52.991 eflags: none 00:20:52.991 sectype: none 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:52.991 08:55:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.991 08:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:56.280 Initializing NVMe Controllers 00:20:56.280 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:56.280 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:56.280 Initialization complete. Launching workers. 00:20:56.280 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30896, failed: 0 00:20:56.280 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30896, failed to submit 0 00:20:56.280 success 0, unsuccessful 30896, failed 0 00:20:56.280 08:55:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:56.280 08:55:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:59.608 Initializing NVMe Controllers 00:20:59.608 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:59.608 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:59.608 Initialization complete. Launching workers. 
00:20:59.608 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63010, failed: 0 00:20:59.608 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26343, failed to submit 36667 00:20:59.608 success 0, unsuccessful 26343, failed 0 00:20:59.608 08:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:59.608 08:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:02.896 Initializing NVMe Controllers 00:21:02.896 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:02.896 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:02.896 Initialization complete. Launching workers. 00:21:02.896 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69370, failed: 0 00:21:02.896 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17342, failed to submit 52028 00:21:02.896 success 0, unsuccessful 17342, failed 0 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:02.896 08:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:03.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.059 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.059 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.059 00:21:05.059 real 0m12.723s 00:21:05.059 user 0m6.114s 00:21:05.059 sys 0m4.063s 00:21:05.059 08:55:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.059 08:55:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:05.059 ************************************ 00:21:05.059 END TEST kernel_target_abort 00:21:05.059 ************************************ 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:05.059 
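(For readers following the kernel_target_abort path: configure_kernel_target picks an unused, non-zoned NVMe block device (here /dev/nvme1n1, after /dev/nvme0n1..n3 are rejected) and exposes it through the in-kernel nvmet target over configfs, listening on 10.0.0.1:4420. The trace shows the echo commands but not their redirect targets, so the attribute file names below are filled in from the standard nvmet configfs layout and should be read as assumptions, not as a verbatim copy of nvmf/common.sh; the echo of "SPDK-nqn.2016-06.io.spdk:testnqn" into one of the subsystem identification attributes is omitted for the same reason.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet                                   # nvmet_tcp also ends up loaded; the teardown removes both
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"        # assumed attribute names below
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"

With the kernel target up, the same qds=(4 24 64) abort loop and the nvme discover step shown in the trace run against 10.0.0.1:4420, and clean_kernel_target unwinds the configfs entries afterwards.)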
08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.059 rmmod nvme_tcp 00:21:05.059 rmmod nvme_fabrics 00:21:05.059 rmmod nvme_keyring 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 85004 ']' 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 85004 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 85004 ']' 00:21:05.059 Process with pid 85004 is not found 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 85004 00:21:05.059 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85004) - No such process 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 85004 is not found' 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:05.059 08:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.318 Waiting for block devices as requested 00:21:05.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.318 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:05.577 08:55:13 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:05.577 08:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.836 08:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:05.836 00:21:05.836 real 0m26.311s 00:21:05.836 user 0m48.079s 00:21:05.836 sys 0m7.527s 00:21:05.836 08:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.836 08:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:05.836 ************************************ 00:21:05.836 END TEST nvmf_abort_qd_sizes 00:21:05.836 ************************************ 00:21:05.836 08:55:13 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:05.836 08:55:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.836 08:55:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.836 08:55:13 -- common/autotest_common.sh@10 -- # set +x 00:21:05.836 ************************************ 00:21:05.836 START TEST keyring_file 00:21:05.836 ************************************ 00:21:05.836 08:55:13 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:05.836 * Looking for test storage... 
00:21:05.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:05.836 08:55:13 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:05.836 08:55:13 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:21:05.836 08:55:13 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:05.836 08:55:13 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.836 08:55:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:06.096 08:55:13 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.096 08:55:13 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.096 --rc genhtml_branch_coverage=1 00:21:06.096 --rc genhtml_function_coverage=1 00:21:06.096 --rc genhtml_legend=1 00:21:06.096 --rc geninfo_all_blocks=1 00:21:06.096 --rc geninfo_unexecuted_blocks=1 00:21:06.096 00:21:06.096 ' 00:21:06.096 08:55:13 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.096 --rc genhtml_branch_coverage=1 00:21:06.096 --rc genhtml_function_coverage=1 00:21:06.096 --rc genhtml_legend=1 00:21:06.096 --rc geninfo_all_blocks=1 00:21:06.096 --rc 
geninfo_unexecuted_blocks=1 00:21:06.096 00:21:06.096 ' 00:21:06.096 08:55:13 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.096 --rc genhtml_branch_coverage=1 00:21:06.096 --rc genhtml_function_coverage=1 00:21:06.096 --rc genhtml_legend=1 00:21:06.096 --rc geninfo_all_blocks=1 00:21:06.096 --rc geninfo_unexecuted_blocks=1 00:21:06.096 00:21:06.096 ' 00:21:06.096 08:55:13 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.096 --rc genhtml_branch_coverage=1 00:21:06.096 --rc genhtml_function_coverage=1 00:21:06.096 --rc genhtml_legend=1 00:21:06.096 --rc geninfo_all_blocks=1 00:21:06.096 --rc geninfo_unexecuted_blocks=1 00:21:06.096 00:21:06.096 ' 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.096 08:55:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.096 08:55:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.096 08:55:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.096 08:55:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.096 08:55:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:06.096 08:55:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.096 08:55:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:06.096 08:55:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:06.096 08:55:13 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YbbqZNRE33 00:21:06.096 08:55:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YbbqZNRE33 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YbbqZNRE33 00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YbbqZNRE33 00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nMOW1jRKMZ 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:06.097 08:55:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nMOW1jRKMZ 00:21:06.097 08:55:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nMOW1jRKMZ 00:21:06.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
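(The two /tmp/tmp.* files created here hold TLS PSKs in the NVMe/TCP interchange format produced by format_interchange_psk. The trace only shows "python -" being invoked, so the sketch below is an assumption about what that helper computes, namely base64 of the configured key bytes plus their CRC32, wrapped in the NVMeTLSkey-1 header with digest indicator 00; it is not a copy of the real script.

    python3 - <<'EOF'
    import base64, struct, zlib

    key = b"00112233445566778899aabbccddeeff"   # key0 from the test; assumed to be used as raw bytes
    crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
    print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
    EOF

The resulting file is chmod 0600 before it is handed to the keyring, which matters for the permission-check part of this test later in the run.)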
00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.nMOW1jRKMZ 00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=85907 00:21:06.097 08:55:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85907 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85907 ']' 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.097 08:55:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:06.097 [2024-12-11 08:55:13.833504] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:21:06.097 [2024-12-11 08:55:13.833616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85907 ] 00:21:06.356 [2024-12-11 08:55:13.984881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.356 [2024-12-11 08:55:14.023542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.356 [2024-12-11 08:55:14.069474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:06.615 08:55:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:06.615 [2024-12-11 08:55:14.208749] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.615 null0 00:21:06.615 [2024-12-11 08:55:14.240707] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.615 [2024-12-11 08:55:14.240882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.615 08:55:14 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:06.615 [2024-12-11 08:55:14.272651] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:06.615 request: 00:21:06.615 { 00:21:06.615 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.615 "secure_channel": false, 00:21:06.615 "listen_address": { 00:21:06.615 "trtype": "tcp", 00:21:06.615 "traddr": "127.0.0.1", 00:21:06.615 "trsvcid": "4420" 00:21:06.615 }, 00:21:06.615 "method": "nvmf_subsystem_add_listener", 00:21:06.615 "req_id": 1 00:21:06.615 } 00:21:06.615 Got JSON-RPC error response 00:21:06.615 response: 00:21:06.615 { 00:21:06.615 "code": -32602, 00:21:06.615 "message": "Invalid parameters" 00:21:06.615 } 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:06.615 08:55:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=85912 00:21:06.615 08:55:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85912 /var/tmp/bperf.sock 00:21:06.615 08:55:14 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85912 ']' 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:06.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.615 08:55:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:06.615 [2024-12-11 08:55:14.340363] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
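(The request/response pair above is the negative half of the listener check: the target already listens on 127.0.0.1:4420, so adding the same listener again is expected to fail. A standalone sketch of the equivalent RPC call, against the default /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
    # expected to fail with code -32602 "Invalid parameters" while the listener already exists

The NOT/es=1 bookkeeping around it simply asserts that the command returned non-zero.)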
00:21:06.615 [2024-12-11 08:55:14.340454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85912 ] 00:21:06.873 [2024-12-11 08:55:14.495156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.873 [2024-12-11 08:55:14.535400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.873 [2024-12-11 08:55:14.570943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:06.873 08:55:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.873 08:55:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:06.873 08:55:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:06.874 08:55:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:07.448 08:55:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nMOW1jRKMZ 00:21:07.448 08:55:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nMOW1jRKMZ 00:21:07.448 08:55:15 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:07.448 08:55:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:07.448 08:55:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.448 08:55:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.448 08:55:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.021 08:55:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YbbqZNRE33 == \/\t\m\p\/\t\m\p\.\Y\b\b\q\Z\N\R\E\3\3 ]] 00:21:08.021 08:55:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:08.021 08:55:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:08.021 08:55:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.021 08:55:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.021 08:55:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.280 08:55:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.nMOW1jRKMZ == \/\t\m\p\/\t\m\p\.\n\M\O\W\1\j\R\K\M\Z ]] 00:21:08.280 08:55:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:08.280 08:55:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:08.280 08:55:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.280 08:55:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.280 08:55:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.280 08:55:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.280 08:55:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:08.280 08:55:16 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:08.280 08:55:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:08.280 08:55:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.280 08:55:16 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.280 08:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.280 08:55:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.848 08:55:16 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:08.848 08:55:16 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.848 08:55:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.848 [2024-12-11 08:55:16.554055] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.107 nvme0n1 00:21:09.107 08:55:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:09.107 08:55:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:09.107 08:55:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.107 08:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:09.107 08:55:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:09.107 08:55:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.366 08:55:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:09.366 08:55:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:09.366 08:55:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:09.366 08:55:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.366 08:55:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:09.366 08:55:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.366 08:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:09.625 08:55:17 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:09.625 08:55:17 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:09.625 Running I/O for 1 seconds... 
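(The bdevperf half of keyring_file drives everything over the bperf RPC socket; stripped of the jq bookkeeping, the sequence traced above is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.nMOW1jRKMZ
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

keyring_get_keys is then queried to confirm that key0's refcnt rises to 2 once the attached controller holds a reference to it, while key1 stays at 1.)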
00:21:10.561 12351.00 IOPS, 48.25 MiB/s 00:21:10.561 Latency(us) 00:21:10.561 [2024-12-11T08:55:18.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.561 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:10.561 nvme0n1 : 1.01 12401.71 48.44 0.00 0.00 10295.24 4379.00 24307.90 00:21:10.561 [2024-12-11T08:55:18.335Z] =================================================================================================================== 00:21:10.561 [2024-12-11T08:55:18.335Z] Total : 12401.71 48.44 0.00 0.00 10295.24 4379.00 24307.90 00:21:10.561 { 00:21:10.561 "results": [ 00:21:10.561 { 00:21:10.561 "job": "nvme0n1", 00:21:10.561 "core_mask": "0x2", 00:21:10.561 "workload": "randrw", 00:21:10.561 "percentage": 50, 00:21:10.561 "status": "finished", 00:21:10.561 "queue_depth": 128, 00:21:10.561 "io_size": 4096, 00:21:10.561 "runtime": 1.006313, 00:21:10.561 "iops": 12401.708017286868, 00:21:10.561 "mibps": 48.44417194252683, 00:21:10.561 "io_failed": 0, 00:21:10.561 "io_timeout": 0, 00:21:10.561 "avg_latency_us": 10295.243487179487, 00:21:10.561 "min_latency_us": 4378.996363636364, 00:21:10.561 "max_latency_us": 24307.898181818182 00:21:10.561 } 00:21:10.561 ], 00:21:10.561 "core_count": 1 00:21:10.561 } 00:21:10.561 08:55:18 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:10.561 08:55:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:11.129 08:55:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:11.129 08:55:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.129 08:55:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.129 08:55:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.129 08:55:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.129 08:55:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.388 08:55:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:11.388 08:55:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:11.388 08:55:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:11.388 08:55:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.388 08:55:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.388 08:55:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.388 08:55:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.649 08:55:19 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:11.649 08:55:19 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:11.649 08:55:19 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.649 08:55:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.649 08:55:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.911 [2024-12-11 08:55:19.485009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:11.911 [2024-12-11 08:55:19.485955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63ce0 (107): Transport endpoint is not connected 00:21:11.911 [2024-12-11 08:55:19.486943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63ce0 (9): Bad file descriptor 00:21:11.911 [2024-12-11 08:55:19.487940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:11.911 [2024-12-11 08:55:19.488002] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:11.911 [2024-12-11 08:55:19.488030] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:11.911 [2024-12-11 08:55:19.488055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:11.911 request: 00:21:11.911 { 00:21:11.911 "name": "nvme0", 00:21:11.911 "trtype": "tcp", 00:21:11.911 "traddr": "127.0.0.1", 00:21:11.911 "adrfam": "ipv4", 00:21:11.911 "trsvcid": "4420", 00:21:11.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:11.911 "prchk_reftag": false, 00:21:11.911 "prchk_guard": false, 00:21:11.911 "hdgst": false, 00:21:11.911 "ddgst": false, 00:21:11.911 "psk": "key1", 00:21:11.911 "allow_unrecognized_csi": false, 00:21:11.911 "method": "bdev_nvme_attach_controller", 00:21:11.911 "req_id": 1 00:21:11.911 } 00:21:11.911 Got JSON-RPC error response 00:21:11.911 response: 00:21:11.911 { 00:21:11.911 "code": -5, 00:21:11.911 "message": "Input/output error" 00:21:11.911 } 00:21:11.911 08:55:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:11.911 08:55:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:11.911 08:55:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:11.911 08:55:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:11.911 08:55:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:11.911 08:55:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.911 08:55:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.911 08:55:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.911 08:55:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.911 08:55:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.169 08:55:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:12.169 08:55:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:12.169 08:55:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:12.169 08:55:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.169 08:55:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.170 08:55:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.170 08:55:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:12.429 08:55:20 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:12.429 08:55:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:12.429 08:55:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:12.688 08:55:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:12.688 08:55:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:12.947 08:55:20 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:12.947 08:55:20 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:12.947 08:55:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.206 08:55:20 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:13.206 08:55:20 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.YbbqZNRE33 00:21:13.206 08:55:20 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.206 08:55:20 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.206 08:55:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.206 08:55:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.465 [2024-12-11 08:55:21.156540] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YbbqZNRE33': 0100660 00:21:13.465 [2024-12-11 08:55:21.156629] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:13.465 request: 00:21:13.465 { 00:21:13.465 "name": "key0", 00:21:13.465 "path": "/tmp/tmp.YbbqZNRE33", 00:21:13.465 "method": "keyring_file_add_key", 00:21:13.465 "req_id": 1 00:21:13.465 } 00:21:13.465 Got JSON-RPC error response 00:21:13.465 response: 00:21:13.465 { 00:21:13.465 "code": -1, 00:21:13.465 "message": "Operation not permitted" 00:21:13.465 } 00:21:13.465 08:55:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:13.465 08:55:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.465 08:55:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.465 08:55:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.465 08:55:21 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.YbbqZNRE33 00:21:13.465 08:55:21 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.465 08:55:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YbbqZNRE33 00:21:13.724 08:55:21 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.YbbqZNRE33 00:21:13.724 08:55:21 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:13.724 08:55:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.724 08:55:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:13.724 08:55:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.724 08:55:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:13.724 08:55:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.292 08:55:21 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:14.292 08:55:21 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.292 08:55:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:14.292 08:55:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.292 08:55:21 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:14.292 08:55:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.292 08:55:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:14.293 08:55:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.293 08:55:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.293 08:55:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.293 [2024-12-11 08:55:21.996837] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YbbqZNRE33': No such file or directory 00:21:14.293 [2024-12-11 08:55:21.996914] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:14.293 [2024-12-11 08:55:21.996936] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:14.293 [2024-12-11 08:55:21.996946] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:14.293 [2024-12-11 08:55:21.996956] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:14.293 [2024-12-11 08:55:21.996967] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:14.293 request: 00:21:14.293 { 00:21:14.293 "name": "nvme0", 00:21:14.293 "trtype": "tcp", 00:21:14.293 "traddr": "127.0.0.1", 00:21:14.293 "adrfam": "ipv4", 00:21:14.293 "trsvcid": "4420", 00:21:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.293 "prchk_reftag": false, 00:21:14.293 "prchk_guard": false, 00:21:14.293 "hdgst": false, 00:21:14.293 "ddgst": false, 00:21:14.293 "psk": "key0", 00:21:14.293 "allow_unrecognized_csi": false, 00:21:14.293 "method": "bdev_nvme_attach_controller", 00:21:14.293 "req_id": 1 00:21:14.293 } 00:21:14.293 Got JSON-RPC error response 00:21:14.293 response: 00:21:14.293 { 00:21:14.293 "code": -19, 00:21:14.293 "message": "No such device" 00:21:14.293 } 00:21:14.293 08:55:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:14.293 08:55:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.293 08:55:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.293 08:55:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.293 08:55:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:14.293 08:55:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:14.552 08:55:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:14.552 
08:55:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Odp3zLcSZq 00:21:14.552 08:55:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:14.552 08:55:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:14.810 08:55:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Odp3zLcSZq 00:21:14.810 08:55:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Odp3zLcSZq 00:21:14.810 08:55:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Odp3zLcSZq 00:21:14.810 08:55:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Odp3zLcSZq 00:21:14.810 08:55:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Odp3zLcSZq 00:21:15.069 08:55:22 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:15.069 08:55:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:15.328 nvme0n1 00:21:15.328 08:55:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:15.328 08:55:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:15.328 08:55:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.328 08:55:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.328 08:55:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.328 08:55:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.587 08:55:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:15.587 08:55:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:15.587 08:55:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:15.846 08:55:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:15.846 08:55:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:15.846 08:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.846 08:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.846 08:55:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.413 08:55:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:16.413 08:55:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:16.413 08:55:23 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.413 08:55:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:16.413 08:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.413 08:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:16.413 08:55:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.413 08:55:24 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:16.413 08:55:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:16.413 08:55:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:16.671 08:55:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:16.671 08:55:24 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:16.671 08:55:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.930 08:55:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:16.930 08:55:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Odp3zLcSZq 00:21:16.930 08:55:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Odp3zLcSZq 00:21:17.189 08:55:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nMOW1jRKMZ 00:21:17.189 08:55:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nMOW1jRKMZ 00:21:17.448 08:55:25 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.448 08:55:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.706 nvme0n1 00:21:17.706 08:55:25 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:17.706 08:55:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:17.964 08:55:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:17.965 "subsystems": [ 00:21:17.965 { 00:21:17.965 "subsystem": "keyring", 00:21:17.965 "config": [ 00:21:17.965 { 00:21:17.965 "method": "keyring_file_add_key", 00:21:17.965 "params": { 00:21:17.965 "name": "key0", 00:21:17.965 "path": "/tmp/tmp.Odp3zLcSZq" 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "keyring_file_add_key", 00:21:17.965 "params": { 00:21:17.965 "name": "key1", 00:21:17.965 "path": "/tmp/tmp.nMOW1jRKMZ" 00:21:17.965 } 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": "iobuf", 00:21:17.965 "config": [ 00:21:17.965 { 00:21:17.965 "method": "iobuf_set_options", 00:21:17.965 "params": { 00:21:17.965 "small_pool_count": 8192, 00:21:17.965 "large_pool_count": 1024, 00:21:17.965 "small_bufsize": 8192, 00:21:17.965 "large_bufsize": 135168, 00:21:17.965 "enable_numa": false 00:21:17.965 } 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": 
"sock", 00:21:17.965 "config": [ 00:21:17.965 { 00:21:17.965 "method": "sock_set_default_impl", 00:21:17.965 "params": { 00:21:17.965 "impl_name": "uring" 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "sock_impl_set_options", 00:21:17.965 "params": { 00:21:17.965 "impl_name": "ssl", 00:21:17.965 "recv_buf_size": 4096, 00:21:17.965 "send_buf_size": 4096, 00:21:17.965 "enable_recv_pipe": true, 00:21:17.965 "enable_quickack": false, 00:21:17.965 "enable_placement_id": 0, 00:21:17.965 "enable_zerocopy_send_server": true, 00:21:17.965 "enable_zerocopy_send_client": false, 00:21:17.965 "zerocopy_threshold": 0, 00:21:17.965 "tls_version": 0, 00:21:17.965 "enable_ktls": false 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "sock_impl_set_options", 00:21:17.965 "params": { 00:21:17.965 "impl_name": "posix", 00:21:17.965 "recv_buf_size": 2097152, 00:21:17.965 "send_buf_size": 2097152, 00:21:17.965 "enable_recv_pipe": true, 00:21:17.965 "enable_quickack": false, 00:21:17.965 "enable_placement_id": 0, 00:21:17.965 "enable_zerocopy_send_server": true, 00:21:17.965 "enable_zerocopy_send_client": false, 00:21:17.965 "zerocopy_threshold": 0, 00:21:17.965 "tls_version": 0, 00:21:17.965 "enable_ktls": false 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "sock_impl_set_options", 00:21:17.965 "params": { 00:21:17.965 "impl_name": "uring", 00:21:17.965 "recv_buf_size": 2097152, 00:21:17.965 "send_buf_size": 2097152, 00:21:17.965 "enable_recv_pipe": true, 00:21:17.965 "enable_quickack": false, 00:21:17.965 "enable_placement_id": 0, 00:21:17.965 "enable_zerocopy_send_server": false, 00:21:17.965 "enable_zerocopy_send_client": false, 00:21:17.965 "zerocopy_threshold": 0, 00:21:17.965 "tls_version": 0, 00:21:17.965 "enable_ktls": false 00:21:17.965 } 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": "vmd", 00:21:17.965 "config": [] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": "accel", 00:21:17.965 "config": [ 00:21:17.965 { 00:21:17.965 "method": "accel_set_options", 00:21:17.965 "params": { 00:21:17.965 "small_cache_size": 128, 00:21:17.965 "large_cache_size": 16, 00:21:17.965 "task_count": 2048, 00:21:17.965 "sequence_count": 2048, 00:21:17.965 "buf_count": 2048 00:21:17.965 } 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": "bdev", 00:21:17.965 "config": [ 00:21:17.965 { 00:21:17.965 "method": "bdev_set_options", 00:21:17.965 "params": { 00:21:17.965 "bdev_io_pool_size": 65535, 00:21:17.965 "bdev_io_cache_size": 256, 00:21:17.965 "bdev_auto_examine": true, 00:21:17.965 "iobuf_small_cache_size": 128, 00:21:17.965 "iobuf_large_cache_size": 16 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_raid_set_options", 00:21:17.965 "params": { 00:21:17.965 "process_window_size_kb": 1024, 00:21:17.965 "process_max_bandwidth_mb_sec": 0 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_iscsi_set_options", 00:21:17.965 "params": { 00:21:17.965 "timeout_sec": 30 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_nvme_set_options", 00:21:17.965 "params": { 00:21:17.965 "action_on_timeout": "none", 00:21:17.965 "timeout_us": 0, 00:21:17.965 "timeout_admin_us": 0, 00:21:17.965 "keep_alive_timeout_ms": 10000, 00:21:17.965 "arbitration_burst": 0, 00:21:17.965 "low_priority_weight": 0, 00:21:17.965 "medium_priority_weight": 0, 00:21:17.965 "high_priority_weight": 0, 00:21:17.965 "nvme_adminq_poll_period_us": 
10000, 00:21:17.965 "nvme_ioq_poll_period_us": 0, 00:21:17.965 "io_queue_requests": 512, 00:21:17.965 "delay_cmd_submit": true, 00:21:17.965 "transport_retry_count": 4, 00:21:17.965 "bdev_retry_count": 3, 00:21:17.965 "transport_ack_timeout": 0, 00:21:17.965 "ctrlr_loss_timeout_sec": 0, 00:21:17.965 "reconnect_delay_sec": 0, 00:21:17.965 "fast_io_fail_timeout_sec": 0, 00:21:17.965 "disable_auto_failback": false, 00:21:17.965 "generate_uuids": false, 00:21:17.965 "transport_tos": 0, 00:21:17.965 "nvme_error_stat": false, 00:21:17.965 "rdma_srq_size": 0, 00:21:17.965 "io_path_stat": false, 00:21:17.965 "allow_accel_sequence": false, 00:21:17.965 "rdma_max_cq_size": 0, 00:21:17.965 "rdma_cm_event_timeout_ms": 0, 00:21:17.965 "dhchap_digests": [ 00:21:17.965 "sha256", 00:21:17.965 "sha384", 00:21:17.965 "sha512" 00:21:17.965 ], 00:21:17.965 "dhchap_dhgroups": [ 00:21:17.965 "null", 00:21:17.965 "ffdhe2048", 00:21:17.965 "ffdhe3072", 00:21:17.965 "ffdhe4096", 00:21:17.965 "ffdhe6144", 00:21:17.965 "ffdhe8192" 00:21:17.965 ], 00:21:17.965 "rdma_umr_per_io": false 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_nvme_attach_controller", 00:21:17.965 "params": { 00:21:17.965 "name": "nvme0", 00:21:17.965 "trtype": "TCP", 00:21:17.965 "adrfam": "IPv4", 00:21:17.965 "traddr": "127.0.0.1", 00:21:17.965 "trsvcid": "4420", 00:21:17.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.965 "prchk_reftag": false, 00:21:17.965 "prchk_guard": false, 00:21:17.965 "ctrlr_loss_timeout_sec": 0, 00:21:17.965 "reconnect_delay_sec": 0, 00:21:17.965 "fast_io_fail_timeout_sec": 0, 00:21:17.965 "psk": "key0", 00:21:17.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.965 "hdgst": false, 00:21:17.965 "ddgst": false, 00:21:17.965 "multipath": "multipath" 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_nvme_set_hotplug", 00:21:17.965 "params": { 00:21:17.965 "period_us": 100000, 00:21:17.965 "enable": false 00:21:17.965 } 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "method": "bdev_wait_for_examine" 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }, 00:21:17.965 { 00:21:17.965 "subsystem": "nbd", 00:21:17.965 "config": [] 00:21:17.965 } 00:21:17.965 ] 00:21:17.965 }' 00:21:17.965 08:55:25 keyring_file -- keyring/file.sh@115 -- # killprocess 85912 00:21:17.965 08:55:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85912 ']' 00:21:17.965 08:55:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85912 00:21:17.965 08:55:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:17.965 08:55:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.965 08:55:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85912 00:21:18.225 killing process with pid 85912 00:21:18.225 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.225 00:21:18.225 Latency(us) 00:21:18.225 [2024-12-11T08:55:25.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.225 [2024-12-11T08:55:25.999Z] =================================================================================================================== 00:21:18.225 [2024-12-11T08:55:25.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.225 08:55:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:18.225 08:55:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:18.225 08:55:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 85912' 00:21:18.225 08:55:25 keyring_file -- common/autotest_common.sh@973 -- # kill 85912 00:21:18.225 08:55:25 keyring_file -- common/autotest_common.sh@978 -- # wait 85912 00:21:18.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.225 08:55:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=86166 00:21:18.225 08:55:25 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:18.225 08:55:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:18.225 "subsystems": [ 00:21:18.225 { 00:21:18.225 "subsystem": "keyring", 00:21:18.225 "config": [ 00:21:18.225 { 00:21:18.225 "method": "keyring_file_add_key", 00:21:18.225 "params": { 00:21:18.225 "name": "key0", 00:21:18.225 "path": "/tmp/tmp.Odp3zLcSZq" 00:21:18.225 } 00:21:18.225 }, 00:21:18.225 { 00:21:18.225 "method": "keyring_file_add_key", 00:21:18.225 "params": { 00:21:18.225 "name": "key1", 00:21:18.225 "path": "/tmp/tmp.nMOW1jRKMZ" 00:21:18.225 } 00:21:18.225 } 00:21:18.225 ] 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "subsystem": "iobuf", 00:21:18.226 "config": [ 00:21:18.226 { 00:21:18.226 "method": "iobuf_set_options", 00:21:18.226 "params": { 00:21:18.226 "small_pool_count": 8192, 00:21:18.226 "large_pool_count": 1024, 00:21:18.226 "small_bufsize": 8192, 00:21:18.226 "large_bufsize": 135168, 00:21:18.226 "enable_numa": false 00:21:18.226 } 00:21:18.226 } 00:21:18.226 ] 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "subsystem": "sock", 00:21:18.226 "config": [ 00:21:18.226 { 00:21:18.226 "method": "sock_set_default_impl", 00:21:18.226 "params": { 00:21:18.226 "impl_name": "uring" 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "sock_impl_set_options", 00:21:18.226 "params": { 00:21:18.226 "impl_name": "ssl", 00:21:18.226 "recv_buf_size": 4096, 00:21:18.226 "send_buf_size": 4096, 00:21:18.226 "enable_recv_pipe": true, 00:21:18.226 "enable_quickack": false, 00:21:18.226 "enable_placement_id": 0, 00:21:18.226 "enable_zerocopy_send_server": true, 00:21:18.226 "enable_zerocopy_send_client": false, 00:21:18.226 "zerocopy_threshold": 0, 00:21:18.226 "tls_version": 0, 00:21:18.226 "enable_ktls": false 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "sock_impl_set_options", 00:21:18.226 "params": { 00:21:18.226 "impl_name": "posix", 00:21:18.226 "recv_buf_size": 2097152, 00:21:18.226 "send_buf_size": 2097152, 00:21:18.226 "enable_recv_pipe": true, 00:21:18.226 "enable_quickack": false, 00:21:18.226 "enable_placement_id": 0, 00:21:18.226 "enable_zerocopy_send_server": true, 00:21:18.226 "enable_zerocopy_send_client": false, 00:21:18.226 "zerocopy_threshold": 0, 00:21:18.226 "tls_version": 0, 00:21:18.226 "enable_ktls": false 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "sock_impl_set_options", 00:21:18.226 "params": { 00:21:18.226 "impl_name": "uring", 00:21:18.226 "recv_buf_size": 2097152, 00:21:18.226 "send_buf_size": 2097152, 00:21:18.226 "enable_recv_pipe": true, 00:21:18.226 "enable_quickack": false, 00:21:18.226 "enable_placement_id": 0, 00:21:18.226 "enable_zerocopy_send_server": false, 00:21:18.226 "enable_zerocopy_send_client": false, 00:21:18.226 "zerocopy_threshold": 0, 00:21:18.226 "tls_version": 0, 00:21:18.226 "enable_ktls": false 00:21:18.226 } 00:21:18.226 } 00:21:18.226 ] 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "subsystem": "vmd", 00:21:18.226 "config": [] 00:21:18.226 
}, 00:21:18.226 { 00:21:18.226 "subsystem": "accel", 00:21:18.226 "config": [ 00:21:18.226 { 00:21:18.226 "method": "accel_set_options", 00:21:18.226 "params": { 00:21:18.226 "small_cache_size": 128, 00:21:18.226 "large_cache_size": 16, 00:21:18.226 "task_count": 2048, 00:21:18.226 "sequence_count": 2048, 00:21:18.226 "buf_count": 2048 00:21:18.226 } 00:21:18.226 } 00:21:18.226 ] 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "subsystem": "bdev", 00:21:18.226 "config": [ 00:21:18.226 { 00:21:18.226 "method": "bdev_set_options", 00:21:18.226 "params": { 00:21:18.226 "bdev_io_pool_size": 65535, 00:21:18.226 "bdev_io_cache_size": 256, 00:21:18.226 "bdev_auto_examine": true, 00:21:18.226 "iobuf_small_cache_size": 128, 00:21:18.226 "iobuf_large_cache_size": 16 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_raid_set_options", 00:21:18.226 "params": { 00:21:18.226 "process_window_size_kb": 1024, 00:21:18.226 "process_max_bandwidth_mb_sec": 0 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_iscsi_set_options", 00:21:18.226 "params": { 00:21:18.226 "timeout_sec": 30 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_nvme_set_options", 00:21:18.226 "params": { 00:21:18.226 "action_on_timeout": "none", 00:21:18.226 "timeout_us": 0, 00:21:18.226 "timeout_admin_us": 0, 00:21:18.226 "keep_alive_timeout_ms": 10000, 00:21:18.226 "arbitration_burst": 0, 00:21:18.226 "low_priority_weight": 0, 00:21:18.226 "medium_priority_weight": 0, 00:21:18.226 "high_priority_weight": 0, 00:21:18.226 "nvme_adminq_poll_period_us": 10000, 00:21:18.226 "nvme_ioq_poll_period_us": 0, 00:21:18.226 "io_queue_requests": 512, 00:21:18.226 "delay_cmd_submit": true, 00:21:18.226 "transport_retry_count": 4, 00:21:18.226 "bdev_retry_count": 3, 00:21:18.226 "transport_ack_timeout": 0, 00:21:18.226 "ctrlr_loss_timeout_sec": 0, 00:21:18.226 "reconnect_delay_sec": 0, 00:21:18.226 "fast_io_fail_timeout_sec": 0, 00:21:18.226 "disable_auto_failback": false, 00:21:18.226 "generate_uuids": false, 00:21:18.226 "transport_tos": 0, 00:21:18.226 "nvme_error_stat": false, 00:21:18.226 "rdma_srq_size": 0, 00:21:18.226 "io_path_stat": false, 00:21:18.226 "allow_accel_sequence": false, 00:21:18.226 "rdma_max_cq_size": 0, 00:21:18.226 "rdma_cm_event_timeout_ms": 0, 00:21:18.226 "dhchap_digests": [ 00:21:18.226 "sha256", 00:21:18.226 "sha384", 00:21:18.226 "sha512" 00:21:18.226 ], 00:21:18.226 "dhchap_dhgroups": [ 00:21:18.226 "null", 00:21:18.226 "ffdhe2048", 00:21:18.226 "ffdhe3072", 00:21:18.226 "ffdhe4096", 00:21:18.226 "ffdhe6144", 00:21:18.226 "ffdhe8192" 00:21:18.226 ], 00:21:18.226 "rdma_umr_per_io": false 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_nvme_attach_controller", 00:21:18.226 "params": { 00:21:18.226 "name": "nvme0", 00:21:18.226 "trtype": "TCP", 00:21:18.226 "adrfam": "IPv4", 00:21:18.226 "traddr": "127.0.0.1", 00:21:18.226 "trsvcid": "4420", 00:21:18.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.226 "prchk_reftag": false, 00:21:18.226 "prchk_guard": false, 00:21:18.226 "ctrlr_loss_timeout_sec": 0, 00:21:18.226 "reconnect_delay_sec": 0, 00:21:18.226 "fast_io_fail_timeout_sec": 0, 00:21:18.226 "psk": "key0", 00:21:18.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:18.226 "hdgst": false, 00:21:18.226 "ddgst": false, 00:21:18.226 "multipath": "multipath" 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_nvme_set_hotplug", 00:21:18.226 "params": { 00:21:18.226 "period_us": 100000, 00:21:18.226 
"enable": false 00:21:18.226 } 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "method": "bdev_wait_for_examine" 00:21:18.226 } 00:21:18.226 ] 00:21:18.226 }, 00:21:18.226 { 00:21:18.226 "subsystem": "nbd", 00:21:18.226 "config": [] 00:21:18.226 } 00:21:18.226 ] 00:21:18.226 }' 00:21:18.226 08:55:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86166 /var/tmp/bperf.sock 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86166 ']' 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.226 08:55:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:18.226 [2024-12-11 08:55:25.931251] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 00:21:18.226 [2024-12-11 08:55:25.931356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86166 ] 00:21:18.485 [2024-12-11 08:55:26.069919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.485 [2024-12-11 08:55:26.102484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.485 [2024-12-11 08:55:26.214563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.485 [2024-12-11 08:55:26.255582] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.420 08:55:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.420 08:55:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:19.420 08:55:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:19.420 08:55:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.420 08:55:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:19.678 08:55:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:19.678 08:55:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:19.678 08:55:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.678 08:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:19.678 08:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.678 08:55:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.678 08:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:19.937 08:55:27 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:19.937 08:55:27 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:19.937 08:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:19.937 08:55:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.937 08:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.937 08:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:21:19.937 08:55:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.195 08:55:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:20.195 08:55:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:20.195 08:55:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:20.195 08:55:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:20.455 08:55:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:20.455 08:55:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:20.455 08:55:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Odp3zLcSZq /tmp/tmp.nMOW1jRKMZ 00:21:20.455 08:55:28 keyring_file -- keyring/file.sh@20 -- # killprocess 86166 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86166 ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86166 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86166 00:21:20.455 killing process with pid 86166 00:21:20.455 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.455 00:21:20.455 Latency(us) 00:21:20.455 [2024-12-11T08:55:28.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.455 [2024-12-11T08:55:28.229Z] =================================================================================================================== 00:21:20.455 [2024-12-11T08:55:28.229Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86166' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@973 -- # kill 86166 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@978 -- # wait 86166 00:21:20.455 08:55:28 keyring_file -- keyring/file.sh@21 -- # killprocess 85907 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85907 ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85907 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85907 00:21:20.455 killing process with pid 85907 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85907' 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@973 -- # kill 85907 00:21:20.455 08:55:28 keyring_file -- common/autotest_common.sh@978 -- # wait 85907 00:21:20.714 ************************************ 00:21:20.714 END TEST keyring_file 00:21:20.714 ************************************ 00:21:20.714 
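The keyring_file pass above reduces to a handful of RPCs against the bdevperf instance listening on /var/tmp/bperf.sock. As a rough standalone sketch (not part of the test suite; it assumes a running bdevperf on that socket, an NVMe/TCP target on 127.0.0.1:4420, and a file already holding an NVMeTLSkey-1:00:... PSK at a hypothetical path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  keyfile=/tmp/psk.txt                                      # hypothetical path; holds the interchange-format PSK

  chmod 0660 "$keyfile"
  $rpc -s "$sock" keyring_file_add_key key0 "$keyfile"      # rejected (-1 Operation not permitted) while group/other bits are set
  chmod 0600 "$keyfile"
  $rpc -s "$sock" keyring_file_add_key key0 "$keyfile"      # accepted once the file is 0600

  $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # 2 while nvme0 holds the key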
00:21:20.714 real 0m15.010s 00:21:20.714 user 0m39.046s 00:21:20.714 sys 0m2.686s 00:21:20.714 08:55:28 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.714 08:55:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:20.714 08:55:28 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:20.714 08:55:28 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:20.714 08:55:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.714 08:55:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.714 08:55:28 -- common/autotest_common.sh@10 -- # set +x 00:21:20.974 ************************************ 00:21:20.974 START TEST keyring_linux 00:21:20.974 ************************************ 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:20.974 Joined session keyring: 477835297 00:21:20.974 * Looking for test storage... 00:21:20.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.974 --rc genhtml_branch_coverage=1 00:21:20.974 --rc genhtml_function_coverage=1 00:21:20.974 --rc genhtml_legend=1 00:21:20.974 --rc geninfo_all_blocks=1 00:21:20.974 --rc geninfo_unexecuted_blocks=1 00:21:20.974 00:21:20.974 ' 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.974 --rc genhtml_branch_coverage=1 00:21:20.974 --rc genhtml_function_coverage=1 00:21:20.974 --rc genhtml_legend=1 00:21:20.974 --rc geninfo_all_blocks=1 00:21:20.974 --rc geninfo_unexecuted_blocks=1 00:21:20.974 00:21:20.974 ' 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.974 --rc genhtml_branch_coverage=1 00:21:20.974 --rc genhtml_function_coverage=1 00:21:20.974 --rc genhtml_legend=1 00:21:20.974 --rc geninfo_all_blocks=1 00:21:20.974 --rc geninfo_unexecuted_blocks=1 00:21:20.974 00:21:20.974 ' 00:21:20.974 08:55:28 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.974 --rc genhtml_branch_coverage=1 00:21:20.974 --rc genhtml_function_coverage=1 00:21:20.974 --rc genhtml_legend=1 00:21:20.974 --rc geninfo_all_blocks=1 00:21:20.974 --rc geninfo_unexecuted_blocks=1 00:21:20.974 00:21:20.974 ' 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.974 08:55:28 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19057b12-55d1-482d-ac95-8c26bd7da4ce 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=19057b12-55d1-482d-ac95-8c26bd7da4ce 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.974 08:55:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.974 08:55:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.974 08:55:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.974 08:55:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.974 08:55:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:20.974 08:55:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.974 08:55:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:20.974 08:55:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:20.974 08:55:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:20.975 08:55:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:20.975 08:55:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:20.975 08:55:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:21.233 /tmp/:spdk-test:key0 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:21.233 08:55:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:21.233 08:55:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:21.233 /tmp/:spdk-test:key1 00:21:21.233 08:55:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:21.233 08:55:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86289 00:21:21.233 08:55:28 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.233 08:55:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86289 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86289 ']' 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.233 08:55:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:21.234 [2024-12-11 08:55:28.879442] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:21:21.234 [2024-12-11 08:55:28.879582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86289 ] 00:21:21.492 [2024-12-11 08:55:29.025650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.492 [2024-12-11 08:55:29.055570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.492 [2024-12-11 08:55:29.092282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:22.058 08:55:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.058 08:55:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:22.058 08:55:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:22.058 08:55:29 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:22.317 [2024-12-11 08:55:29.835124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.317 null0 00:21:22.317 [2024-12-11 08:55:29.867099] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.317 [2024-12-11 08:55:29.867282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.317 08:55:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:22.317 601773567 00:21:22.317 08:55:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:22.317 1036731351 00:21:22.317 08:55:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86307 00:21:22.317 08:55:29 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:22.317 08:55:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86307 /var/tmp/bperf.sock 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86307 ']' 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.317 08:55:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:22.317 [2024-12-11 08:55:29.938110] Starting SPDK v25.01-pre git sha1 97b0ef63e / DPDK 24.03.0 initialization... 
00:21:22.317 [2024-12-11 08:55:29.938208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86307 ] 00:21:22.317 [2024-12-11 08:55:30.077449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.575 [2024-12-11 08:55:30.110750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.575 08:55:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.575 08:55:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:22.575 08:55:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:22.575 08:55:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:22.833 08:55:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:22.833 08:55:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:23.092 [2024-12-11 08:55:30.804900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.092 08:55:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:23.092 08:55:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:23.351 [2024-12-11 08:55:31.089959] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.610 nvme0n1 00:21:23.610 08:55:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:23.610 08:55:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:23.610 08:55:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:23.610 08:55:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:23.610 08:55:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:23.610 08:55:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:23.870 08:55:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:23.870 08:55:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:23.870 08:55:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:23.870 08:55:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:23.870 08:55:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:23.870 08:55:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:23.870 08:55:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@25 -- # sn=601773567 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
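The keyring_linux sequence above drives the same RPCs but resolves PSKs from the kernel session keyring instead of files; keys are stored with keyctl in the interchange format (NVMeTLSkey-1:00:<base64>:) produced by format_interchange_psk. A rough outline under the same assumptions (bdevperf started with --wait-for-rpc on /var/tmp/bperf.sock, target on 127.0.0.1:4420; the key string is the example value from this run):

  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  $rpc -s "$sock" keyring_linux_set_options --enable        # let SPDK look up keys in the kernel keyring
  $rpc -s "$sock" framework_start_init
  $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  keyctl search @s user :spdk-test:key0                     # serial should match the .sn reported by keyring_get_keys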
00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@26 -- # [[ 601773567 == \6\0\1\7\7\3\5\6\7 ]] 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 601773567 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:24.165 08:55:31 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.165 Running I/O for 1 seconds... 00:21:25.125 12359.00 IOPS, 48.28 MiB/s 00:21:25.125 Latency(us) 00:21:25.125 [2024-12-11T08:55:32.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:25.125 nvme0n1 : 1.01 12366.88 48.31 0.00 0.00 10295.02 7864.32 16801.05 00:21:25.125 [2024-12-11T08:55:32.899Z] =================================================================================================================== 00:21:25.125 [2024-12-11T08:55:32.899Z] Total : 12366.88 48.31 0.00 0.00 10295.02 7864.32 16801.05 00:21:25.125 { 00:21:25.125 "results": [ 00:21:25.125 { 00:21:25.125 "job": "nvme0n1", 00:21:25.125 "core_mask": "0x2", 00:21:25.125 "workload": "randread", 00:21:25.125 "status": "finished", 00:21:25.125 "queue_depth": 128, 00:21:25.125 "io_size": 4096, 00:21:25.125 "runtime": 1.009875, 00:21:25.125 "iops": 12366.877088748608, 00:21:25.125 "mibps": 48.30811362792425, 00:21:25.125 "io_failed": 0, 00:21:25.125 "io_timeout": 0, 00:21:25.125 "avg_latency_us": 10295.018225492979, 00:21:25.125 "min_latency_us": 7864.32, 00:21:25.125 "max_latency_us": 16801.04727272727 00:21:25.125 } 00:21:25.125 ], 00:21:25.125 "core_count": 1 00:21:25.125 } 00:21:25.125 08:55:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:25.125 08:55:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:25.693 08:55:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:25.693 08:55:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:25.693 08:55:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:25.693 08:55:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:25.693 08:55:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:25.693 08:55:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.952 08:55:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:25.952 08:55:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:25.952 08:55:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:25.952 08:55:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:25.952 08:55:33 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.952 08:55:33 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:25.952 08:55:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:26.212 [2024-12-11 08:55:33.780833] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:26.212 [2024-12-11 08:55:33.781322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f0b90 (107): Transport endpoint is not connected 00:21:26.212 [2024-12-11 08:55:33.782313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f0b90 (9): Bad file descriptor 00:21:26.212 [2024-12-11 08:55:33.783311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:26.212 [2024-12-11 08:55:33.783340] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:26.212 [2024-12-11 08:55:33.783351] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:26.212 [2024-12-11 08:55:33.783362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:26.212 request: 00:21:26.212 { 00:21:26.212 "name": "nvme0", 00:21:26.212 "trtype": "tcp", 00:21:26.212 "traddr": "127.0.0.1", 00:21:26.212 "adrfam": "ipv4", 00:21:26.212 "trsvcid": "4420", 00:21:26.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.212 "prchk_reftag": false, 00:21:26.212 "prchk_guard": false, 00:21:26.212 "hdgst": false, 00:21:26.212 "ddgst": false, 00:21:26.212 "psk": ":spdk-test:key1", 00:21:26.212 "allow_unrecognized_csi": false, 00:21:26.212 "method": "bdev_nvme_attach_controller", 00:21:26.212 "req_id": 1 00:21:26.212 } 00:21:26.212 Got JSON-RPC error response 00:21:26.212 response: 00:21:26.212 { 00:21:26.212 "code": -5, 00:21:26.212 "message": "Input/output error" 00:21:26.212 } 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@33 -- # sn=601773567 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 601773567 00:21:26.212 1 links removed 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@33 -- # sn=1036731351 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1036731351 00:21:26.212 1 links removed 00:21:26.212 08:55:33 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86307 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86307 ']' 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86307 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86307 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.212 killing process with pid 86307 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86307' 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@973 -- # kill 86307 00:21:26.212 Received shutdown signal, test time was about 1.000000 seconds 00:21:26.212 00:21:26.212 Latency(us) 
00:21:26.212 [2024-12-11T08:55:33.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.212 [2024-12-11T08:55:33.986Z] =================================================================================================================== 00:21:26.212 [2024-12-11T08:55:33.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.212 08:55:33 keyring_linux -- common/autotest_common.sh@978 -- # wait 86307 00:21:26.472 08:55:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86289 00:21:26.472 08:55:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86289 ']' 00:21:26.472 08:55:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86289 00:21:26.472 08:55:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:26.472 08:55:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.472 08:55:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86289 00:21:26.472 08:55:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.472 08:55:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.472 killing process with pid 86289 00:21:26.472 08:55:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86289' 00:21:26.472 08:55:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 86289 00:21:26.472 08:55:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 86289 00:21:26.731 ************************************ 00:21:26.731 END TEST keyring_linux 00:21:26.731 ************************************ 00:21:26.731 00:21:26.731 real 0m5.757s 00:21:26.731 user 0m11.540s 00:21:26.731 sys 0m1.348s 00:21:26.731 08:55:34 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.731 08:55:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:26.731 08:55:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:26.731 08:55:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:26.731 08:55:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:26.731 08:55:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:26.731 08:55:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:26.731 08:55:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:26.731 08:55:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:26.731 08:55:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.731 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:21:26.731 08:55:34 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:26.731 08:55:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:26.731 08:55:34 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:26.731 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:21:28.635 INFO: APP EXITING 00:21:28.635 INFO: killing all VMs 
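The cleanup trap reverses the setup: each test key is resolved to its serial with keyctl search and unlinked from the session keyring, then bdevperf (pid 86307) and spdk_tgt (pid 86289) are shut down with the killprocess helper. A condensed sketch of that teardown; the pid variables are illustrative stand-ins for the values the test tracks:

# Remove the test keys from the session keyring by serial number.
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
done

# killprocess-style shutdown: confirm the pid is alive, send SIGTERM, and
# wait for it to exit (the real helper also inspects the process name with
# 'ps --no-headers -o comm=' before killing, as seen in the log).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}
killprocess "$bperfpid"   # bdevperf (pid 86307 in this run)
killprocess "$tgt_pid"    # spdk_tgt (pid 86289 in this run)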
00:21:28.635 INFO: killing vhost app 00:21:28.635 INFO: EXIT DONE 00:21:29.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:29.203 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:29.203 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:29.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.031 Cleaning 00:21:30.031 Removing: /var/run/dpdk/spdk0/config 00:21:30.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:30.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:30.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:30.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:30.031 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:30.031 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:30.031 Removing: /var/run/dpdk/spdk1/config 00:21:30.031 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:30.031 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:30.031 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:30.031 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:30.031 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:30.031 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:30.031 Removing: /var/run/dpdk/spdk2/config 00:21:30.031 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:30.031 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:30.031 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:30.031 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:30.031 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:30.031 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:30.031 Removing: /var/run/dpdk/spdk3/config 00:21:30.031 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:30.031 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:30.031 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:30.031 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:30.031 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:30.031 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:30.031 Removing: /var/run/dpdk/spdk4/config 00:21:30.031 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:30.031 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:30.031 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:30.031 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:30.031 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:30.031 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:30.031 Removing: /dev/shm/nvmf_trace.0 00:21:30.031 Removing: /dev/shm/spdk_tgt_trace.pid57884 00:21:30.031 Removing: /var/run/dpdk/spdk0 00:21:30.031 Removing: /var/run/dpdk/spdk1 00:21:30.031 Removing: /var/run/dpdk/spdk2 00:21:30.031 Removing: /var/run/dpdk/spdk3 00:21:30.031 Removing: /var/run/dpdk/spdk4 00:21:30.031 Removing: /var/run/dpdk/spdk_pid57737 00:21:30.031 Removing: /var/run/dpdk/spdk_pid57884 00:21:30.031 Removing: /var/run/dpdk/spdk_pid58077 00:21:30.031 Removing: /var/run/dpdk/spdk_pid58164 00:21:30.031 Removing: /var/run/dpdk/spdk_pid58191 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58295 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58306 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58440 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58635 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58784 00:21:30.032 Removing: /var/run/dpdk/spdk_pid58862 00:21:30.032 
Removing: /var/run/dpdk/spdk_pid58933 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59032 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59104 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59137 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59171 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59242 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59327 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59767 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59806 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59844 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59858 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59912 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59915 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59969 00:21:30.032 Removing: /var/run/dpdk/spdk_pid59983 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60023 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60041 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60081 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60099 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60235 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60265 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60348 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60674 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60686 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60723 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60736 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60752 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60771 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60784 00:21:30.032 Removing: /var/run/dpdk/spdk_pid60800 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60819 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60832 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60848 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60867 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60875 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60896 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60915 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60923 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60938 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60957 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60971 00:21:30.291 Removing: /var/run/dpdk/spdk_pid60986 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61017 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61030 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61060 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61132 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61155 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61169 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61193 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61202 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61210 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61247 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61266 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61289 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61298 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61308 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61312 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61327 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61331 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61337 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61350 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61373 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61405 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61409 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61443 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61447 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61449 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61495 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61501 00:21:30.291 Removing: 
/var/run/dpdk/spdk_pid61533 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61535 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61548 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61550 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61553 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61565 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61567 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61580 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61651 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61706 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61820 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61853 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61893 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61907 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61929 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61944 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61975 00:21:30.291 Removing: /var/run/dpdk/spdk_pid61991 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62072 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62083 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62127 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62195 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62246 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62277 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62372 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62414 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62447 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62673 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62771 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62794 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62823 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62857 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62890 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62924 00:21:30.291 Removing: /var/run/dpdk/spdk_pid62950 00:21:30.291 Removing: /var/run/dpdk/spdk_pid63348 00:21:30.291 Removing: /var/run/dpdk/spdk_pid63383 00:21:30.291 Removing: /var/run/dpdk/spdk_pid63719 00:21:30.291 Removing: /var/run/dpdk/spdk_pid64189 00:21:30.291 Removing: /var/run/dpdk/spdk_pid64451 00:21:30.291 Removing: /var/run/dpdk/spdk_pid65282 00:21:30.291 Removing: /var/run/dpdk/spdk_pid66201 00:21:30.291 Removing: /var/run/dpdk/spdk_pid66313 00:21:30.291 Removing: /var/run/dpdk/spdk_pid66386 00:21:30.550 Removing: /var/run/dpdk/spdk_pid67796 00:21:30.550 Removing: /var/run/dpdk/spdk_pid68109 00:21:30.550 Removing: /var/run/dpdk/spdk_pid71827 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72181 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72293 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72420 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72441 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72462 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72483 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72562 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72691 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72826 00:21:30.550 Removing: /var/run/dpdk/spdk_pid72895 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73076 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73159 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73244 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73590 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73996 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73997 00:21:30.550 Removing: /var/run/dpdk/spdk_pid73998 00:21:30.550 Removing: /var/run/dpdk/spdk_pid74252 00:21:30.550 Removing: /var/run/dpdk/spdk_pid74510 00:21:30.550 Removing: /var/run/dpdk/spdk_pid74887 00:21:30.550 Removing: /var/run/dpdk/spdk_pid74889 00:21:30.550 Removing: /var/run/dpdk/spdk_pid75220 00:21:30.550 Removing: /var/run/dpdk/spdk_pid75238 
00:21:30.551 Removing: /var/run/dpdk/spdk_pid75252 00:21:30.551 Removing: /var/run/dpdk/spdk_pid75285 00:21:30.551 Removing: /var/run/dpdk/spdk_pid75290 00:21:30.551 Removing: /var/run/dpdk/spdk_pid75633 00:21:30.551 Removing: /var/run/dpdk/spdk_pid75682 00:21:30.551 Removing: /var/run/dpdk/spdk_pid76014 00:21:30.551 Removing: /var/run/dpdk/spdk_pid76204 00:21:30.551 Removing: /var/run/dpdk/spdk_pid76621 00:21:30.551 Removing: /var/run/dpdk/spdk_pid77161 00:21:30.551 Removing: /var/run/dpdk/spdk_pid78042 00:21:30.551 Removing: /var/run/dpdk/spdk_pid78671 00:21:30.551 Removing: /var/run/dpdk/spdk_pid78679 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80694 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80741 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80794 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80842 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80942 00:21:30.551 Removing: /var/run/dpdk/spdk_pid80995 00:21:30.551 Removing: /var/run/dpdk/spdk_pid81042 00:21:30.551 Removing: /var/run/dpdk/spdk_pid81095 00:21:30.551 Removing: /var/run/dpdk/spdk_pid81447 00:21:30.551 Removing: /var/run/dpdk/spdk_pid82659 00:21:30.551 Removing: /var/run/dpdk/spdk_pid82805 00:21:30.551 Removing: /var/run/dpdk/spdk_pid83049 00:21:30.551 Removing: /var/run/dpdk/spdk_pid83649 00:21:30.551 Removing: /var/run/dpdk/spdk_pid83809 00:21:30.551 Removing: /var/run/dpdk/spdk_pid83969 00:21:30.551 Removing: /var/run/dpdk/spdk_pid84066 00:21:30.551 Removing: /var/run/dpdk/spdk_pid84225 00:21:30.551 Removing: /var/run/dpdk/spdk_pid84334 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85042 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85077 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85118 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85362 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85403 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85437 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85907 00:21:30.551 Removing: /var/run/dpdk/spdk_pid85912 00:21:30.551 Removing: /var/run/dpdk/spdk_pid86166 00:21:30.551 Removing: /var/run/dpdk/spdk_pid86289 00:21:30.551 Removing: /var/run/dpdk/spdk_pid86307 00:21:30.551 Clean 00:21:30.551 08:55:38 -- common/autotest_common.sh@1453 -- # return 0 00:21:30.551 08:55:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:30.551 08:55:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.551 08:55:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.810 08:55:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:30.810 08:55:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.810 08:55:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.810 08:55:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:30.810 08:55:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:30.810 08:55:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:30.810 08:55:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:30.810 08:55:38 -- spdk/autotest.sh@398 -- # hostname 00:21:30.810 08:55:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:31.069 geninfo: WARNING: invalid characters removed from testname! 
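Before the workspace is wiped, autotest captures an lcov trace for the test run, merges it with the pre-test baseline, and filters out DPDK, system, and example paths. Roughly, with the long --rc option list trimmed to the branch/function switches visible in the log and $out standing for the spdk/../output directory used there:

out=/home/vagrant/spdk_repo/spdk/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

# Capture coverage accumulated during the tests, tagged with the host name.
$LCOV -c --no-external -d /home/vagrant/spdk_repo/spdk \
    -t "$(hostname)" -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture...
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# ...and drop paths that should not count towards SPDK coverage.
$LCOV -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '/usr/*' --ignore-errors unused,unused \
    -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"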
00:21:57.618 08:56:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.523 08:56:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.059 08:56:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.643 08:56:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:07.177 08:56:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:10.464 08:56:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.998 08:56:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:12.998 08:56:20 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:12.998 08:56:20 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:12.998 08:56:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:12.998 08:56:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:12.998 08:56:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:12.998 + [[ -n 5259 ]] 00:22:12.998 + sudo kill 5259 00:22:13.008 [Pipeline] } 00:22:13.021 [Pipeline] // timeout 00:22:13.027 [Pipeline] } 00:22:13.040 [Pipeline] // stage 00:22:13.045 [Pipeline] } 00:22:13.059 [Pipeline] // catchError 00:22:13.085 [Pipeline] stage 00:22:13.088 [Pipeline] { (Stop VM) 00:22:13.102 [Pipeline] sh 00:22:13.380 + vagrant halt 00:22:17.571 ==> default: Halting domain... 
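timing_finish, seen just before the VM is halted, renders the per-step timing log as a flame graph only when FlameGraph is installed; redirecting its SVG output to a file is an assumption here, since the xtrace in the log does not show the redirection:

flamegraph=/usr/local/FlameGraph/flamegraph.pl
if [[ -x "$flamegraph" ]]; then
    # flamegraph.pl writes SVG to stdout; timing.svg is an assumed destination.
    "$flamegraph" --title 'Build Timing' --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg
fi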
00:22:22.853 [Pipeline] sh 00:22:23.134 + vagrant destroy -f 00:22:26.432 ==> default: Removing domain... 00:22:26.444 [Pipeline] sh 00:22:26.725 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:26.734 [Pipeline] } 00:22:26.748 [Pipeline] // stage 00:22:26.753 [Pipeline] } 00:22:26.767 [Pipeline] // dir 00:22:26.773 [Pipeline] } 00:22:26.787 [Pipeline] // wrap 00:22:26.793 [Pipeline] } 00:22:26.806 [Pipeline] // catchError 00:22:26.816 [Pipeline] stage 00:22:26.819 [Pipeline] { (Epilogue) 00:22:26.832 [Pipeline] sh 00:22:27.114 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:33.703 [Pipeline] catchError 00:22:33.707 [Pipeline] { 00:22:33.722 [Pipeline] sh 00:22:34.004 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:34.263 Artifacts sizes are good 00:22:34.272 [Pipeline] } 00:22:34.286 [Pipeline] // catchError 00:22:34.296 [Pipeline] archiveArtifacts 00:22:34.303 Archiving artifacts 00:22:34.424 [Pipeline] cleanWs 00:22:34.435 [WS-CLEANUP] Deleting project workspace... 00:22:34.435 [WS-CLEANUP] Deferred wipeout is used... 00:22:34.441 [WS-CLEANUP] done 00:22:34.443 [Pipeline] } 00:22:34.457 [Pipeline] // stage 00:22:34.463 [Pipeline] } 00:22:34.476 [Pipeline] // node 00:22:34.481 [Pipeline] End of Pipeline 00:22:34.569 Finished: SUCCESS
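The pipeline epilogue above destroys the Vagrant VM, moves the output directory into the Jenkins workspace, and compresses and size-checks the artifacts before archiving. Only the shell commands visible in the log are reproduced in this sketch; archiveArtifacts and cleanWs are Jenkins Pipeline steps, not shell:

# Tear down the test VM and collect its results into the workspace.
vagrant halt
vagrant destroy -f
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output

# Compress artifacts and verify they stay within the allowed size budget.
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh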